Test Report: Docker_Linux_docker_arm64 19679

7cae0481c1ae024841826a3639f158d099448b48:2024-09-20:36298

Failed tests (1/342)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 75.8s    |
TestAddons/parallel/Registry (75.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.133137ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8c8sz" [48080209-95a2-4f92-83d3-4a339a6b1b54] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.107355039s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fvmkn" [34be177b-c148-4a04-9275-afdde27c3678] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003922871s
addons_test.go:338: (dbg) Run:  kubectl --context addons-850577 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-850577 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-850577 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.116407871s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-850577 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 ip
2024/09/20 18:08:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-850577
helpers_test.go:235: (dbg) docker inspect addons-850577:

-- stdout --
	[
	    {
	        "Id": "6831d4cd52379740c1979983bf2509960e33e6db9026aab499f3708dea800ac1",
	        "Created": "2024-09-20T17:55:23.535929527Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283912,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T17:55:23.689798683Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/6831d4cd52379740c1979983bf2509960e33e6db9026aab499f3708dea800ac1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6831d4cd52379740c1979983bf2509960e33e6db9026aab499f3708dea800ac1/hostname",
	        "HostsPath": "/var/lib/docker/containers/6831d4cd52379740c1979983bf2509960e33e6db9026aab499f3708dea800ac1/hosts",
	        "LogPath": "/var/lib/docker/containers/6831d4cd52379740c1979983bf2509960e33e6db9026aab499f3708dea800ac1/6831d4cd52379740c1979983bf2509960e33e6db9026aab499f3708dea800ac1-json.log",
	        "Name": "/addons-850577",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-850577:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-850577",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/729f67f266498943b8f444415b740dfdd56cc298487ffc0eb4fad30eaf8e314b-init/diff:/var/lib/docker/overlay2/1053aca897409668adcebf437baa1e9990e7187814591cd6c6cc447a037db101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/729f67f266498943b8f444415b740dfdd56cc298487ffc0eb4fad30eaf8e314b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/729f67f266498943b8f444415b740dfdd56cc298487ffc0eb4fad30eaf8e314b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/729f67f266498943b8f444415b740dfdd56cc298487ffc0eb4fad30eaf8e314b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-850577",
	                "Source": "/var/lib/docker/volumes/addons-850577/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-850577",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-850577",
	                "name.minikube.sigs.k8s.io": "addons-850577",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1dcc65f70edbe4c1f002bba608b1a90acbe5a780d8dc481d550464d5c74d0500",
	            "SandboxKey": "/var/run/docker/netns/1dcc65f70edb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-850577": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5c40dbc4ac9ee3f07f8108bcc1dafbc45dc8569431be99bf26417d20f03fcfe3",
	                    "EndpointID": "4afefbeb0c72823b51c6a2b1c9a3a4be182585ba20287c118c0bf149489dc6dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-850577",
	                        "6831d4cd5237"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-850577 -n addons-850577
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 logs -n 25: (1.255033235s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-997842   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | -p download-only-997842                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| delete  | -p download-only-997842                                                                     | download-only-997842   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| start   | -o=json --download-only                                                                     | download-only-348035   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | -p download-only-348035                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| delete  | -p download-only-348035                                                                     | download-only-348035   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| delete  | -p download-only-997842                                                                     | download-only-997842   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| delete  | -p download-only-348035                                                                     | download-only-348035   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| start   | --download-only -p                                                                          | download-docker-115951 | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | download-docker-115951                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-115951                                                                   | download-docker-115951 | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-357659   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | binary-mirror-357659                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43129                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-357659                                                                     | binary-mirror-357659   | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | addons-850577                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | addons-850577                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-850577 --wait=true                                                                | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-850577 addons disable                                                                | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-850577 addons disable                                                                | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:07 UTC | 20 Sep 24 18:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-850577 addons                                                                        | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:08 UTC | 20 Sep 24 18:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-850577 addons                                                                        | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:08 UTC | 20 Sep 24 18:08 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:08 UTC | 20 Sep 24 18:08 UTC |
	|         | -p addons-850577                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-850577 ssh cat                                                                       | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:08 UTC | 20 Sep 24 18:08 UTC |
	|         | /opt/local-path-provisioner/pvc-f345ef47-0969-4cae-a23c-0960456ac5a9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-850577 addons disable                                                                | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:08 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-850577 ip                                                                            | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:08 UTC | 20 Sep 24 18:08 UTC |
	| addons  | addons-850577 addons disable                                                                | addons-850577          | jenkins | v1.34.0 | 20 Sep 24 18:08 UTC | 20 Sep 24 18:08 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:54:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:54:58.723268  283424 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:54:58.723629  283424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:54:58.723645  283424 out.go:358] Setting ErrFile to fd 2...
	I0920 17:54:58.723652  283424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:54:58.723968  283424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 17:54:58.724499  283424 out.go:352] Setting JSON to false
	I0920 17:54:58.725438  283424 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5848,"bootTime":1726849051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 17:54:58.725513  283424 start.go:139] virtualization:  
	I0920 17:54:58.727323  283424 out.go:177] * [addons-850577] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 17:54:58.728744  283424 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:54:58.728873  283424 notify.go:220] Checking for updates...
	I0920 17:54:58.731152  283424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:54:58.732464  283424 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	I0920 17:54:58.733712  283424 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	I0920 17:54:58.734811  283424 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 17:54:58.735864  283424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:54:58.737280  283424 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:54:58.759469  283424 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:54:58.759613  283424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:54:58.832410  283424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 17:54:58.822442379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:54:58.832527  283424 docker.go:318] overlay module found
	I0920 17:54:58.834064  283424 out.go:177] * Using the docker driver based on user configuration
	I0920 17:54:58.835296  283424 start.go:297] selected driver: docker
	I0920 17:54:58.835313  283424 start.go:901] validating driver "docker" against <nil>
	I0920 17:54:58.835326  283424 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:54:58.836052  283424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:54:58.893549  283424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 17:54:58.88278022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:54:58.893759  283424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:54:58.894068  283424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:54:58.895495  283424 out.go:177] * Using Docker driver with root privileges
	I0920 17:54:58.896775  283424 cni.go:84] Creating CNI manager for ""
	I0920 17:54:58.896895  283424 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:54:58.896916  283424 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:54:58.897013  283424 start.go:340] cluster config:
	{Name:addons-850577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-850577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:54:58.898571  283424 out.go:177] * Starting "addons-850577" primary control-plane node in "addons-850577" cluster
	I0920 17:54:58.899878  283424 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 17:54:58.901198  283424 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 17:54:58.902965  283424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:54:58.903052  283424 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 17:54:58.903070  283424 cache.go:56] Caching tarball of preloaded images
	I0920 17:54:58.903049  283424 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 17:54:58.903203  283424 preload.go:172] Found /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 17:54:58.903215  283424 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 17:54:58.903650  283424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/config.json ...
	I0920 17:54:58.903697  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/config.json: {Name:mk4073299def46734232b028ed2a0aef8d8bb2a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:54:58.924479  283424 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 17:54:58.924610  283424 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 17:54:58.924629  283424 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 17:54:58.924634  283424 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 17:54:58.924642  283424 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 17:54:58.924647  283424 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 17:55:16.638580  283424 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 17:55:16.638620  283424 cache.go:194] Successfully downloaded all kic artifacts
	I0920 17:55:16.638653  283424 start.go:360] acquireMachinesLock for addons-850577: {Name:mk3e3f602b855e8d1b162612e49c9b4462386ec4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:55:16.638802  283424 start.go:364] duration metric: took 123.468µs to acquireMachinesLock for "addons-850577"
	I0920 17:55:16.638835  283424 start.go:93] Provisioning new machine with config: &{Name:addons-850577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-850577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:55:16.638914  283424 start.go:125] createHost starting for "" (driver="docker")
	I0920 17:55:16.640684  283424 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 17:55:16.640967  283424 start.go:159] libmachine.API.Create for "addons-850577" (driver="docker")
	I0920 17:55:16.641007  283424 client.go:168] LocalClient.Create starting
	I0920 17:55:16.641142  283424 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca.pem
	I0920 17:55:17.135430  283424 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/cert.pem
	I0920 17:55:17.562068  283424 cli_runner.go:164] Run: docker network inspect addons-850577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 17:55:17.575800  283424 cli_runner.go:211] docker network inspect addons-850577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 17:55:17.575891  283424 network_create.go:284] running [docker network inspect addons-850577] to gather additional debugging logs...
	I0920 17:55:17.575912  283424 cli_runner.go:164] Run: docker network inspect addons-850577
	W0920 17:55:17.591264  283424 cli_runner.go:211] docker network inspect addons-850577 returned with exit code 1
	I0920 17:55:17.591303  283424 network_create.go:287] error running [docker network inspect addons-850577]: docker network inspect addons-850577: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-850577 not found
	I0920 17:55:17.591318  283424 network_create.go:289] output of [docker network inspect addons-850577]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-850577 not found
	
	** /stderr **
	I0920 17:55:17.591443  283424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 17:55:17.606707  283424 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001897080}
	I0920 17:55:17.606760  283424 network_create.go:124] attempt to create docker network addons-850577 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 17:55:17.606824  283424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-850577 addons-850577
	I0920 17:55:17.677415  283424 network_create.go:108] docker network addons-850577 192.168.49.0/24 created
	I0920 17:55:17.677445  283424 kic.go:121] calculated static IP "192.168.49.2" for the "addons-850577" container
	I0920 17:55:17.677524  283424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 17:55:17.690965  283424 cli_runner.go:164] Run: docker volume create addons-850577 --label name.minikube.sigs.k8s.io=addons-850577 --label created_by.minikube.sigs.k8s.io=true
	I0920 17:55:17.706912  283424 oci.go:103] Successfully created a docker volume addons-850577
	I0920 17:55:17.707006  283424 cli_runner.go:164] Run: docker run --rm --name addons-850577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-850577 --entrypoint /usr/bin/test -v addons-850577:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 17:55:19.750457  283424 cli_runner.go:217] Completed: docker run --rm --name addons-850577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-850577 --entrypoint /usr/bin/test -v addons-850577:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.043405251s)
	I0920 17:55:19.750488  283424 oci.go:107] Successfully prepared a docker volume addons-850577
	I0920 17:55:19.750505  283424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:55:19.750526  283424 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 17:55:19.750593  283424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-850577:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 17:55:23.470695  283424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-850577:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.720059067s)
	I0920 17:55:23.470725  283424 kic.go:203] duration metric: took 3.720196788s to extract preloaded images to volume ...
	W0920 17:55:23.470870  283424 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 17:55:23.470975  283424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 17:55:23.521380  283424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-850577 --name addons-850577 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-850577 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-850577 --network addons-850577 --ip 192.168.49.2 --volume addons-850577:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 17:55:23.856455  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Running}}
	I0920 17:55:23.877678  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:23.900786  283424 cli_runner.go:164] Run: docker exec addons-850577 stat /var/lib/dpkg/alternatives/iptables
	I0920 17:55:23.964677  283424 oci.go:144] the created container "addons-850577" has a running status.
	I0920 17:55:23.964736  283424 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa...
	I0920 17:55:25.195417  283424 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 17:55:25.217510  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:25.235713  283424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 17:55:25.235736  283424 kic_runner.go:114] Args: [docker exec --privileged addons-850577 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 17:55:25.293288  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:25.312253  283424 machine.go:93] provisionDockerMachine start ...
	I0920 17:55:25.312364  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:25.330096  283424 main.go:141] libmachine: Using SSH client type: native
	I0920 17:55:25.330374  283424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0920 17:55:25.330391  283424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:55:25.480270  283424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-850577
	
	I0920 17:55:25.480295  283424 ubuntu.go:169] provisioning hostname "addons-850577"
	I0920 17:55:25.480360  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:25.497811  283424 main.go:141] libmachine: Using SSH client type: native
	I0920 17:55:25.498086  283424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0920 17:55:25.498104  283424 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-850577 && echo "addons-850577" | sudo tee /etc/hostname
	I0920 17:55:25.657449  283424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-850577
	
	I0920 17:55:25.657537  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:25.677195  283424 main.go:141] libmachine: Using SSH client type: native
	I0920 17:55:25.677533  283424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0920 17:55:25.677558  283424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-850577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-850577/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-850577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:55:25.825170  283424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:55:25.825204  283424 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19679-277267/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-277267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-277267/.minikube}
	I0920 17:55:25.825234  283424 ubuntu.go:177] setting up certificates
	I0920 17:55:25.825244  283424 provision.go:84] configureAuth start
	I0920 17:55:25.825317  283424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-850577
	I0920 17:55:25.847886  283424 provision.go:143] copyHostCerts
	I0920 17:55:25.848134  283424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-277267/.minikube/cert.pem (1123 bytes)
	I0920 17:55:25.848320  283424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-277267/.minikube/key.pem (1675 bytes)
	I0920 17:55:25.848572  283424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-277267/.minikube/ca.pem (1082 bytes)
	I0920 17:55:25.848669  283424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-277267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca-key.pem org=jenkins.addons-850577 san=[127.0.0.1 192.168.49.2 addons-850577 localhost minikube]
	I0920 17:55:26.306971  283424 provision.go:177] copyRemoteCerts
	I0920 17:55:26.307047  283424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:55:26.307090  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:26.328103  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:26.433669  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:55:26.460527  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:55:26.485915  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:55:26.510584  283424 provision.go:87] duration metric: took 685.321652ms to configureAuth
	I0920 17:55:26.510613  283424 ubuntu.go:193] setting minikube options for container-runtime
	I0920 17:55:26.510835  283424 config.go:182] Loaded profile config "addons-850577": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:55:26.510896  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:26.527583  283424 main.go:141] libmachine: Using SSH client type: native
	I0920 17:55:26.527828  283424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0920 17:55:26.527846  283424 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 17:55:26.673314  283424 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 17:55:26.673336  283424 ubuntu.go:71] root file system type: overlay
	I0920 17:55:26.673450  283424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 17:55:26.673522  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:26.690999  283424 main.go:141] libmachine: Using SSH client type: native
	I0920 17:55:26.691248  283424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0920 17:55:26.691332  283424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 17:55:26.848803  283424 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 17:55:26.848901  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:26.866715  283424 main.go:141] libmachine: Using SSH client type: native
	I0920 17:55:26.866961  283424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0920 17:55:26.866984  283424 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 17:55:27.710613  283424 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 17:55:26.842727010 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 17:55:27.710661  283424 machine.go:96] duration metric: took 2.398378608s to provisionDockerMachine
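The `sudo diff -u … || { sudo mv …; systemctl … restart docker; }` command above is a write-if-changed update: the freshly rendered unit file only replaces the one on disk (and only triggers a daemon-reload and restart) when the two actually differ. A minimal sketch of the same pattern, using `/tmp` paths in place of the real systemd unit files and an `echo` standing in for the `systemctl` calls (both are illustrative assumptions):

```shell
# Write-if-changed: replace the file and "restart" only when the
# rendered content differs from what is already on disk.
unit=/tmp/docker.service        # stand-in for /lib/systemd/system/docker.service
new=/tmp/docker.service.new
printf 'old contents\n' > "$unit"
printf 'new contents\n' > "$new"
# diff exits non-zero when the files differ, so the || block runs only
# when an update is actually needed.
diff -u "$unit" "$new" >/dev/null || {
  mv "$new" "$unit"
  echo "would run: systemctl daemon-reload && systemctl restart docker"
}
```

On a second run with identical files, `diff` exits 0 and the move/restart block is skipped entirely, which is what makes the provisioning step idempotent.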
	I0920 17:55:27.710674  283424 client.go:171] duration metric: took 11.069657045s to LocalClient.Create
	I0920 17:55:27.710693  283424 start.go:167] duration metric: took 11.069726903s to libmachine.API.Create "addons-850577"
	I0920 17:55:27.710706  283424 start.go:293] postStartSetup for "addons-850577" (driver="docker")
	I0920 17:55:27.710718  283424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:55:27.710793  283424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:55:27.710850  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:27.732075  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:27.834183  283424 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:55:27.837786  283424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:55:27.837829  283424 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:55:27.837844  283424 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:55:27.837852  283424 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 17:55:27.837867  283424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-277267/.minikube/addons for local assets ...
	I0920 17:55:27.837944  283424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-277267/.minikube/files for local assets ...
	I0920 17:55:27.837971  283424 start.go:296] duration metric: took 127.258595ms for postStartSetup
	I0920 17:55:27.838316  283424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-850577
	I0920 17:55:27.857310  283424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/config.json ...
	I0920 17:55:27.857686  283424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:55:27.857746  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:27.874224  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:27.973732  283424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 17:55:27.978336  283424 start.go:128] duration metric: took 11.339404514s to createHost
	I0920 17:55:27.978366  283424 start.go:83] releasing machines lock for "addons-850577", held for 11.33954943s
	I0920 17:55:27.978448  283424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-850577
	I0920 17:55:27.995528  283424 ssh_runner.go:195] Run: cat /version.json
	I0920 17:55:27.995586  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:27.995839  283424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:55:27.995908  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:28.019116  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:28.035722  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:28.125295  283424 ssh_runner.go:195] Run: systemctl --version
	I0920 17:55:28.263424  283424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:55:28.268032  283424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 17:55:28.294078  283424 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 17:55:28.294163  283424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:55:28.325010  283424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 17:55:28.325043  283424 start.go:495] detecting cgroup driver to use...
	I0920 17:55:28.325078  283424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:55:28.325191  283424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
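The crictl configuration step above just renders a one-line YAML file pointing crictl at the containerd socket. The same write, demonstrated against a temp directory rather than `/etc` (an assumption here so no root is needed):

```shell
# Render the one-line crictl config; tee both echoes it and writes it,
# mirroring the printf | sudo tee pipeline in the log.
dir=/tmp/crictl-demo            # stand-in for /etc
mkdir -p "$dir"
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  | tee "$dir/crictl.yaml"
```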
	I0920 17:55:28.341925  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:55:28.351993  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:55:28.362058  283424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:55:28.362198  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:55:28.372257  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:55:28.382929  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:55:28.393548  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:55:28.403726  283424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:55:28.413421  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:55:28.423194  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:55:28.432866  283424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:55:28.442902  283424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:55:28.451597  283424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:55:28.460219  283424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:55:28.583359  283424 ssh_runner.go:195] Run: sudo systemctl restart containerd
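The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place so containerd matches the detected "cgroupfs" driver before the restart. A sketch of the key substitution on a throwaway copy of the file (assumes GNU sed, as in the log, for `-i -r`):

```shell
# Flip SystemdCgroup while preserving the line's indentation: the
# captured group \1 is the leading whitespace.
cfg=/tmp/containerd-config.toml   # stand-in for /etc/containerd/config.toml
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```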
	I0920 17:55:28.698215  283424 start.go:495] detecting cgroup driver to use...
	I0920 17:55:28.698328  283424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:55:28.698420  283424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 17:55:28.720453  283424 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 17:55:28.720582  283424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:55:28.732836  283424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:55:28.749667  283424 ssh_runner.go:195] Run: which cri-dockerd
	I0920 17:55:28.753639  283424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:55:28.762678  283424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:55:28.786361  283424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 17:55:28.896207  283424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 17:55:28.998600  283424 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:55:28.998744  283424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 17:55:29.020841  283424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:55:29.119701  283424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 17:55:29.396995  283424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 17:55:29.410098  283424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:55:29.422737  283424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 17:55:29.507156  283424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 17:55:29.591807  283424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:55:29.679043  283424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 17:55:29.695443  283424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:55:29.707985  283424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:55:29.803108  283424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 17:55:29.877305  283424 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 17:55:29.877464  283424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 17:55:29.881559  283424 start.go:563] Will wait 60s for crictl version
	I0920 17:55:29.881672  283424 ssh_runner.go:195] Run: which crictl
	I0920 17:55:29.885811  283424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:55:29.925407  283424 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0920 17:55:29.925555  283424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:55:29.957360  283424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:55:29.985364  283424 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0920 17:55:29.985556  283424 cli_runner.go:164] Run: docker network inspect addons-850577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 17:55:30.001772  283424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 17:55:30.019549  283424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
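The `/etc/hosts` update above is a strip-then-append rewrite: drop any stale `host.minikube.internal` entry, append the fresh one, and copy the result back, so the entry ends up present exactly once. The same pattern on a temp file (tab-separated entry as in the log; the `$'\t'` quoting assumes bash):

```shell
# Strip any stale entry, append the fresh one, then copy the rebuilt
# file back over the original.
hosts=/tmp/hosts-demo           # stand-in for /etc/hosts
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```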
	I0920 17:55:30.054883  283424 kubeadm.go:883] updating cluster {Name:addons-850577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-850577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:55:30.055020  283424 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:55:30.055093  283424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:55:30.095559  283424 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 17:55:30.095907  283424 docker.go:615] Images already preloaded, skipping extraction
	I0920 17:55:30.096073  283424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:55:30.121040  283424 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 17:55:30.121067  283424 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:55:30.121085  283424 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 17:55:30.121221  283424 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-850577 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-850577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:55:30.121313  283424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 17:55:30.185211  283424 cni.go:84] Creating CNI manager for ""
	I0920 17:55:30.185306  283424 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:55:30.185332  283424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:55:30.185386  283424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-850577 NodeName:addons-850577 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:55:30.185604  283424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-850577"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:55:30.185758  283424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:55:30.200010  283424 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:55:30.200096  283424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:55:30.213078  283424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 17:55:30.244663  283424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:55:30.267619  283424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0920 17:55:30.288458  283424 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 17:55:30.292579  283424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:55:30.306197  283424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:55:30.390711  283424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:55:30.406854  283424 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577 for IP: 192.168.49.2
	I0920 17:55:30.406876  283424 certs.go:194] generating shared ca certs ...
	I0920 17:55:30.406893  283424 certs.go:226] acquiring lock for ca certs: {Name:mka5e3b330be08f0c494d71f47d44c93b5adf1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:30.407101  283424 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-277267/.minikube/ca.key
	I0920 17:55:30.680821  283424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-277267/.minikube/ca.crt ...
	I0920 17:55:30.680858  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/ca.crt: {Name:mk232667ea8e1ae46cd0055bb27f1cd24bdf1fc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:30.681058  283424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-277267/.minikube/ca.key ...
	I0920 17:55:30.681074  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/ca.key: {Name:mk245add7be79e96b8506c28e4b25c83a32dac0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:30.681168  283424 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-277267/.minikube/proxy-client-ca.key
	I0920 17:55:30.942534  283424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-277267/.minikube/proxy-client-ca.crt ...
	I0920 17:55:30.942563  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/proxy-client-ca.crt: {Name:mk6aada32ca6adf9622cd81082857e6ee6e1851b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:30.942754  283424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-277267/.minikube/proxy-client-ca.key ...
	I0920 17:55:30.942767  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/proxy-client-ca.key: {Name:mk11084c8c75bbe9fbcec781aa9e1ad698974469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:30.942849  283424 certs.go:256] generating profile certs ...
	I0920 17:55:30.942914  283424 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.key
	I0920 17:55:30.942944  283424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt with IP's: []
	I0920 17:55:31.324932  283424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt ...
	I0920 17:55:31.324972  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: {Name:mk919ac99d4cc46156ef76a6cf7124a498f3cc28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:31.325172  283424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.key ...
	I0920 17:55:31.325189  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.key: {Name:mk650ec7e006168a21faf50e2badbab9bdc8c6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:31.325319  283424 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.key.8dc847fb
	I0920 17:55:31.325342  283424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.crt.8dc847fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 17:55:31.685192  283424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.crt.8dc847fb ...
	I0920 17:55:31.685225  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.crt.8dc847fb: {Name:mk0eb5cd2f1dec3407223c93632a91891f3f1332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:31.685418  283424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.key.8dc847fb ...
	I0920 17:55:31.685433  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.key.8dc847fb: {Name:mk7a771ec10854e067e2c1ddf868e6d54efd8246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:31.685527  283424 certs.go:381] copying /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.crt.8dc847fb -> /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.crt
	I0920 17:55:31.685611  283424 certs.go:385] copying /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.key.8dc847fb -> /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.key
	I0920 17:55:31.685670  283424 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.key
	I0920 17:55:31.685690  283424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.crt with IP's: []
	I0920 17:55:32.108328  283424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.crt ...
	I0920 17:55:32.108362  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.crt: {Name:mkec3c1f231202e321c9ffbbf4afe74b84f01d81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:32.108542  283424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.key ...
	I0920 17:55:32.108556  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.key: {Name:mk47071cc3a7e1e13296cb51c31b067ecc341f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:32.108801  283424 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 17:55:32.108844  283424 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:55:32.108875  283424 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:55:32.108904  283424 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-277267/.minikube/certs/key.pem (1675 bytes)
	I0920 17:55:32.109527  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:55:32.134215  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 17:55:32.160669  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:55:32.185522  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:55:32.210814  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:55:32.234718  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:55:32.260345  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:55:32.284779  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:55:32.309684  283424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-277267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:55:32.334262  283424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:55:32.352453  283424 ssh_runner.go:195] Run: openssl version
	I0920 17:55:32.358543  283424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:55:32.368306  283424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:55:32.371809  283424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:55 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:55:32.371880  283424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:55:32.378847  283424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:55:32.388732  283424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:55:32.392104  283424 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:55:32.392194  283424 kubeadm.go:392] StartCluster: {Name:addons-850577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-850577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:55:32.392323  283424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 17:55:32.409243  283424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:55:32.417905  283424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:55:32.428009  283424 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 17:55:32.428084  283424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:55:32.437352  283424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:55:32.437374  283424 kubeadm.go:157] found existing configuration files:
	
	I0920 17:55:32.437428  283424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:55:32.446277  283424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:55:32.446344  283424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:55:32.454921  283424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:55:32.463702  283424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:55:32.463769  283424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:55:32.473212  283424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:55:32.482098  283424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:55:32.482220  283424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:55:32.491347  283424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:55:32.500657  283424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:55:32.500777  283424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:55:32.509132  283424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 17:55:32.554897  283424 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:55:32.555244  283424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:55:32.576355  283424 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 17:55:32.576441  283424 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 17:55:32.576481  283424 kubeadm.go:310] OS: Linux
	I0920 17:55:32.576540  283424 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 17:55:32.576600  283424 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 17:55:32.576658  283424 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 17:55:32.576745  283424 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 17:55:32.576806  283424 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 17:55:32.576868  283424 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 17:55:32.576919  283424 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 17:55:32.577000  283424 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 17:55:32.577063  283424 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 17:55:32.655407  283424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:55:32.655617  283424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:55:32.655798  283424 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:55:32.675479  283424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:55:32.681559  283424 out.go:235]   - Generating certificates and keys ...
	I0920 17:55:32.681673  283424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:55:32.681747  283424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:55:32.918080  283424 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:55:33.306698  283424 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:55:33.461219  283424 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:55:34.344336  283424 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:55:34.701009  283424 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:55:34.701322  283424 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-850577 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 17:55:35.122233  283424 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:55:35.122567  283424 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-850577 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 17:55:35.889254  283424 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:55:36.490090  283424 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:55:36.751123  283424 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:55:36.751348  283424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:55:37.380474  283424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:55:38.003291  283424 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:55:38.901878  283424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:55:39.893023  283424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:55:40.449455  283424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:55:40.450081  283424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:55:40.453007  283424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:55:40.456091  283424 out.go:235]   - Booting up control plane ...
	I0920 17:55:40.456191  283424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:55:40.456265  283424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:55:40.456332  283424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:55:40.467107  283424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:55:40.472950  283424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:55:40.473309  283424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:55:40.571454  283424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:55:40.571580  283424 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:55:42.572914  283424 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001796817s
	I0920 17:55:42.573000  283424 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:55:49.574299  283424 kubeadm.go:310] [api-check] The API server is healthy after 7.001392439s
	I0920 17:55:49.597391  283424 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:55:49.612875  283424 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:55:49.644069  283424 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:55:49.644532  283424 kubeadm.go:310] [mark-control-plane] Marking the node addons-850577 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:55:49.660272  283424 kubeadm.go:310] [bootstrap-token] Using token: gaa441.o297y6sembn5rifg
	I0920 17:55:49.662908  283424 out.go:235]   - Configuring RBAC rules ...
	I0920 17:55:49.663053  283424 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:55:49.668367  283424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:55:49.681235  283424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:55:49.685562  283424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:55:49.691694  283424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:55:49.695955  283424 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:55:49.984601  283424 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:55:50.432940  283424 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:55:50.981798  283424 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:55:50.983249  283424 kubeadm.go:310] 
	I0920 17:55:50.983352  283424 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:55:50.983395  283424 kubeadm.go:310] 
	I0920 17:55:50.983515  283424 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:55:50.983527  283424 kubeadm.go:310] 
	I0920 17:55:50.983565  283424 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:55:50.983636  283424 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:55:50.983698  283424 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:55:50.983708  283424 kubeadm.go:310] 
	I0920 17:55:50.983789  283424 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:55:50.983808  283424 kubeadm.go:310] 
	I0920 17:55:50.983864  283424 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:55:50.983870  283424 kubeadm.go:310] 
	I0920 17:55:50.983933  283424 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:55:50.984022  283424 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:55:50.984123  283424 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:55:50.984138  283424 kubeadm.go:310] 
	I0920 17:55:50.984236  283424 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:55:50.984333  283424 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:55:50.984343  283424 kubeadm.go:310] 
	I0920 17:55:50.984441  283424 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gaa441.o297y6sembn5rifg \
	I0920 17:55:50.984550  283424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dda7c38952bdad399beeb9e26a12c373a083486065b66df016dc0978329ffab7 \
	I0920 17:55:50.984581  283424 kubeadm.go:310] 	--control-plane 
	I0920 17:55:50.984597  283424 kubeadm.go:310] 
	I0920 17:55:50.984723  283424 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:55:50.984732  283424 kubeadm.go:310] 
	I0920 17:55:50.984815  283424 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gaa441.o297y6sembn5rifg \
	I0920 17:55:50.984924  283424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dda7c38952bdad399beeb9e26a12c373a083486065b66df016dc0978329ffab7 
	I0920 17:55:50.988035  283424 kubeadm.go:310] W0920 17:55:32.550985    1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:55:50.988341  283424 kubeadm.go:310] W0920 17:55:32.552189    1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:55:50.988562  283424 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 17:55:50.988730  283424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:55:50.988754  283424 cni.go:84] Creating CNI manager for ""
	I0920 17:55:50.988770  283424 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:55:50.991814  283424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:55:50.994640  283424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:55:51.003466  283424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 17:55:51.027080  283424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:55:51.027225  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:51.027312  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-850577 minikube.k8s.io/updated_at=2024_09_20T17_55_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-850577 minikube.k8s.io/primary=true
	I0920 17:55:51.039840  283424 ops.go:34] apiserver oom_adj: -16
	I0920 17:55:51.166407  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:51.667334  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:52.167156  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:52.666789  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:53.166462  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:53.667265  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:54.167126  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:54.666591  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:55.167367  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:55.666506  283424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:55:55.812270  283424 kubeadm.go:1113] duration metric: took 4.785096317s to wait for elevateKubeSystemPrivileges
	I0920 17:55:55.812299  283424 kubeadm.go:394] duration metric: took 23.420109906s to StartCluster
	I0920 17:55:55.812317  283424 settings.go:142] acquiring lock: {Name:mk09b32ff96b7671dba5302a8b6c59455cc8c936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:55.812449  283424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-277267/kubeconfig
	I0920 17:55:55.812889  283424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-277267/kubeconfig: {Name:mk5af2d4931ad8cfafe47962fae060ea6c484f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:55:55.813108  283424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:55:55.813256  283424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:55:55.813516  283424 config.go:182] Loaded profile config "addons-850577": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:55:55.813547  283424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:55:55.813638  283424 addons.go:69] Setting yakd=true in profile "addons-850577"
	I0920 17:55:55.813652  283424 addons.go:234] Setting addon yakd=true in "addons-850577"
	I0920 17:55:55.813677  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.814204  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.814724  283424 addons.go:69] Setting cloud-spanner=true in profile "addons-850577"
	I0920 17:55:55.814751  283424 addons.go:234] Setting addon cloud-spanner=true in "addons-850577"
	I0920 17:55:55.814753  283424 addons.go:69] Setting metrics-server=true in profile "addons-850577"
	I0920 17:55:55.814769  283424 addons.go:234] Setting addon metrics-server=true in "addons-850577"
	I0920 17:55:55.814792  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.814799  283424 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-850577"
	I0920 17:55:55.814828  283424 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-850577"
	I0920 17:55:55.814845  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.815247  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.815279  283424 addons.go:69] Setting volumesnapshots=true in profile "addons-850577"
	I0920 17:55:55.815290  283424 addons.go:234] Setting addon volumesnapshots=true in "addons-850577"
	I0920 17:55:55.815307  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.815678  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.815252  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.826407  283424 addons.go:69] Setting default-storageclass=true in profile "addons-850577"
	I0920 17:55:55.826441  283424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-850577"
	I0920 17:55:55.826573  283424 out.go:177] * Verifying Kubernetes components...
	I0920 17:55:55.826766  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.837823  283424 addons.go:69] Setting gcp-auth=true in profile "addons-850577"
	I0920 17:55:55.837861  283424 mustload.go:65] Loading cluster: addons-850577
	I0920 17:55:55.838079  283424 config.go:182] Loaded profile config "addons-850577": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:55:55.838342  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.841020  283424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:55:55.856659  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:55:55.860183  283424 addons.go:69] Setting ingress=true in profile "addons-850577"
	I0920 17:55:55.860235  283424 addons.go:234] Setting addon ingress=true in "addons-850577"
	I0920 17:55:55.860313  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.861193  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.866658  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:55:55.878310  283424 addons.go:69] Setting ingress-dns=true in profile "addons-850577"
	I0920 17:55:55.878345  283424 addons.go:234] Setting addon ingress-dns=true in "addons-850577"
	I0920 17:55:55.878390  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.878882  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.896682  283424 addons.go:69] Setting inspektor-gadget=true in profile "addons-850577"
	I0920 17:55:55.896798  283424 addons.go:234] Setting addon inspektor-gadget=true in "addons-850577"
	I0920 17:55:55.896876  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.906682  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:55:55.909776  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:55:55.814793  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.913009  283424 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:55:55.917125  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:55:55.917215  283424 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:55:55.917225  283424 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:55:55.917296  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:55.918186  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.932794  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.815258  283424 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-850577"
	I0920 17:55:55.933066  283424 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-850577"
	I0920 17:55:55.933126  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.933597  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.956843  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:55:55.958176  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:55:55.962129  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:55:55.962356  283424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:55:55.962422  283424 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:55:55.962542  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:55.815269  283424 addons.go:69] Setting storage-provisioner=true in profile "addons-850577"
	I0920 17:55:55.815272  283424 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-850577"
	I0920 17:55:55.972413  283424 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-850577"
	I0920 17:55:55.815275  283424 addons.go:69] Setting volcano=true in profile "addons-850577"
	I0920 17:55:55.815263  283424 addons.go:69] Setting registry=true in profile "addons-850577"
	I0920 17:55:55.972566  283424 addons.go:234] Setting addon registry=true in "addons-850577"
	I0920 17:55:55.975229  283424 addons.go:234] Setting addon storage-provisioner=true in "addons-850577"
	I0920 17:55:55.975296  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.975971  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.979310  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.987411  283424 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 17:55:55.987653  283424 addons.go:234] Setting addon volcano=true in "addons-850577"
	I0920 17:55:55.987747  283424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:55:55.987777  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.987760  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:55.988846  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:55.989338  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:56.001134  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:56.001353  283424 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:55:56.036762  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:55:56.036859  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.036539  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.001362  283424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:55:56.052714  283424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:55:56.052827  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.002972  283424 addons.go:234] Setting addon default-storageclass=true in "addons-850577"
	I0920 17:55:56.065239  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:56.065801  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:56.073232  283424 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:55:56.073498  283424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 17:55:56.085078  283424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:55:56.085339  283424 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:55:56.085358  283424 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:55:56.085463  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.107168  283424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:55:56.110242  283424 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:55:56.110271  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 17:55:56.110342  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.147385  283424 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 17:55:56.150514  283424 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:55:56.150547  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 17:55:56.150615  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.211273  283424 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:55:56.214082  283424 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:55:56.214107  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:55:56.214180  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.235854  283424 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:55:56.240078  283424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:55:56.240143  283424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:55:56.240390  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.273373  283424 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-850577"
	I0920 17:55:56.273415  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:55:56.273853  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:55:56.293255  283424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:55:56.293389  283424 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 17:55:56.294100  283424 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:55:56.296120  283424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:55:56.296144  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:55:56.296229  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.297746  283424 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 17:55:56.300324  283424 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:55:56.303728  283424 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:55:56.303770  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:55:56.303918  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.323504  283424 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 17:55:56.325244  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.330353  283424 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:55:56.330383  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 17:55:56.330458  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.386726  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.388141  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.391440  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.408746  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.409552  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.415237  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.420824  283424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:55:56.420847  283424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:55:56.420913  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.473074  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.492780  283424 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:55:56.498805  283424 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:55:56.504282  283424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:55:56.504313  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:55:56.504403  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:55:56.508638  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.517058  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.525919  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.530517  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:55:56.558539  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	W0920 17:55:56.559910  283424 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 17:55:56.559938  283424 retry.go:31] will retry after 373.163707ms: ssh: handshake failed: EOF
	I0920 17:55:56.940830  283424 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.099726559s)
	I0920 17:55:56.940948  283424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:55:56.941122  283424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.127845258s)
	I0920 17:55:56.941287  283424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:55:56.959398  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:55:56.992089  283424 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:55:56.992172  283424 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:55:57.203822  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:55:57.222139  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:55:57.336016  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:55:57.477823  283424 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:55:57.477896  283424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:55:57.531344  283424 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:55:57.531446  283424 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:55:57.582226  283424 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:55:57.582319  283424 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:55:57.593198  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:55:57.608190  283424 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:55:57.608274  283424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:55:57.749630  283424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:55:57.749728  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:55:57.759026  283424 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:55:57.759127  283424 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:55:57.788562  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:55:57.872168  283424 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:55:57.872247  283424 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:55:57.883324  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:55:57.888631  283424 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:55:57.888735  283424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:55:57.914421  283424 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:55:57.914456  283424 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:55:57.944389  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:55:57.983658  283424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:55:57.983692  283424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:55:58.040837  283424 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:55:58.040864  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:55:58.090461  283424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:55:58.090491  283424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:55:58.127602  283424 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:55:58.127628  283424 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:55:58.157868  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:55:58.242956  283424 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:55:58.242987  283424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:55:58.318756  283424 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:55:58.318783  283424 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:55:58.337836  283424 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:55:58.337861  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:55:58.344488  283424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:55:58.344514  283424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:55:58.463752  283424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:55:58.463780  283424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:55:58.479395  283424 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:55:58.479421  283424 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:55:58.517236  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:55:58.576370  283424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:55:58.576395  283424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:55:58.580184  283424 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:55:58.580210  283424 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:55:58.647858  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:55:58.742224  283424 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:55:58.742293  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:55:58.776006  283424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:55:58.776031  283424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:55:58.895855  283424 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:55:58.895880  283424 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:55:58.999484  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:55:59.073319  283424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:55:59.073343  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:55:59.214768  283424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.273440122s)
	I0920 17:55:59.214794  283424 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 17:55:59.215900  283424 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.274930545s)
	I0920 17:55:59.216643  283424 node_ready.go:35] waiting up to 6m0s for node "addons-850577" to be "Ready" ...
	I0920 17:55:59.221277  283424 node_ready.go:49] node "addons-850577" has status "Ready":"True"
	I0920 17:55:59.221305  283424 node_ready.go:38] duration metric: took 4.635317ms for node "addons-850577" to be "Ready" ...
	I0920 17:55:59.221315  283424 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:55:59.234497  283424 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:55:59.234523  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:55:59.240075  283424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace to be "Ready" ...
	I0920 17:55:59.522658  283424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:55:59.522687  283424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:55:59.588333  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:55:59.733892  283424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-850577" context rescaled to 1 replicas
	I0920 17:55:59.984278  283424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:55:59.984372  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:56:00.214208  283424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:56:00.214297  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:56:00.978623  283424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:56:00.978767  283424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:56:01.253705  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:01.816668  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:56:01.919550  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.960059634s)
	I0920 17:56:02.104113  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.900203791s)
	I0920 17:56:02.104235  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.881947673s)
	I0920 17:56:02.104312  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.768268714s)
	I0920 17:56:03.038702  283424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:56:03.038793  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:56:03.074156  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:56:03.747008  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:04.398322  283424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:56:04.629794  283424 addons.go:234] Setting addon gcp-auth=true in "addons-850577"
	I0920 17:56:04.629853  283424 host.go:66] Checking if "addons-850577" exists ...
	I0920 17:56:04.630432  283424 cli_runner.go:164] Run: docker container inspect addons-850577 --format={{.State.Status}}
	I0920 17:56:04.665261  283424 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:56:04.665323  283424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-850577
	I0920 17:56:04.702968  283424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/addons-850577/id_rsa Username:docker}
	I0920 17:56:06.246610  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:07.434612  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.841311657s)
	I0920 17:56:07.434645  283424 addons.go:475] Verifying addon ingress=true in "addons-850577"
	I0920 17:56:07.434841  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.646252144s)
	I0920 17:56:07.440324  283424 out.go:177] * Verifying ingress addon...
	I0920 17:56:07.443702  283424 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 17:56:07.448927  283424 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 17:56:07.448956  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:07.995705  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:08.274085  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:08.474627  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:08.959769  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:09.219975  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.062077204s)
	I0920 17:56:09.220009  283424 addons.go:475] Verifying addon registry=true in "addons-850577"
	I0920 17:56:09.220114  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.275493266s)
	I0920 17:56:09.220441  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.703173904s)
	I0920 17:56:09.220530  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.572643695s)
	I0920 17:56:09.220545  283424 addons.go:475] Verifying addon metrics-server=true in "addons-850577"
	I0920 17:56:09.220624  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.221111254s)
	W0920 17:56:09.220646  283424 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:56:09.220664  283424 retry.go:31] will retry after 368.784236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
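The failure above is a CRD establishment race: the VolumeSnapshotClass object is applied in the same `kubectl apply` batch as the CRD that defines it, so the API server has no mapping for the kind yet ("ensure CRDs are installed first"), and minikube's `retry.go` simply re-runs the apply after a delay. A minimal retry-with-delay sketch of that pattern (fixed delay for simplicity; the helper and the simulated apply below are hypothetical stand-ins, and the real `retry.go` uses a growing, jittered interval):

```python
import time

def retry(fn, attempts=3, delay=0.1):
    """Re-run fn until it succeeds or attempts run out,
    sleeping `delay` seconds between tries."""
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as err:  # e.g. "no matches for kind" from kubectl
            last_err = err
            time.sleep(delay)
    raise last_err

# Simulate an apply that fails once while the CRD is still being
# established, then succeeds on the retry.
calls = {"n": 0}
def apply_snapshot_class():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("no matches for kind VolumeSnapshotClass")
    return "created"

print(retry(apply_snapshot_class, delay=0.01))  # -> created
```

An alternative to retrying is a two-phase apply: install the CRDs first, wait for their `Established` condition, then apply the custom resources.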
	I0920 17:56:09.220769  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.632397263s)
	I0920 17:56:09.220900  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.337501334s)
	I0920 17:56:09.222938  283424 out.go:177] * Verifying registry addon...
	I0920 17:56:09.224666  283424 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-850577 service yakd-dashboard -n yakd-dashboard
	
	I0920 17:56:09.227520  283424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:56:09.262971  283424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:56:09.262995  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:09.468371  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:09.590449  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:56:09.788859  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:09.952509  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:10.235618  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:10.276932  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:10.456584  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:10.480251  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.66350606s)
	I0920 17:56:10.480288  283424 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-850577"
	I0920 17:56:10.480413  283424 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.815127127s)
	I0920 17:56:10.485073  283424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:56:10.485146  283424 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:56:10.488317  283424 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:56:10.489372  283424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:56:10.491662  283424 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:56:10.491690  283424 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:56:10.557976  283424 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:56:10.558009  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:10.581505  283424 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:56:10.581529  283424 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:56:10.642302  283424 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:56:10.642333  283424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:56:10.731478  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:10.766566  283424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:56:10.949640  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:10.998099  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:11.231797  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:11.449250  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:11.495283  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:11.732183  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:11.949475  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:11.994415  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:12.072258  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.481758839s)
	I0920 17:56:12.237138  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:12.341390  283424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.574778489s)
	I0920 17:56:12.344684  283424 addons.go:475] Verifying addon gcp-auth=true in "addons-850577"
	I0920 17:56:12.349350  283424 out.go:177] * Verifying gcp-auth addon...
	I0920 17:56:12.353106  283424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:56:12.360346  283424 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:56:12.461065  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:12.561824  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:12.732898  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:12.747730  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:12.948196  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:12.996514  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:13.231445  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:13.449079  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:13.494925  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:13.733269  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:13.948844  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:13.994957  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:14.232130  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:14.448971  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:14.495193  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:14.732157  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:14.949401  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:14.994952  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:15.232630  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:15.247409  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:15.447822  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:15.493955  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:15.731373  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:15.959107  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:15.994680  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:16.231621  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:16.448744  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:16.494669  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:16.731790  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:16.949607  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:16.995056  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:17.231889  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:17.448484  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:17.494836  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:17.732652  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:17.748859  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:17.948385  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:17.995278  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:18.231918  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:18.448653  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:18.494610  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:18.733063  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:18.948552  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:18.994375  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:19.233141  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:19.460393  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:19.563034  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:19.731826  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:19.948829  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:19.996587  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:20.232038  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:20.247172  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:20.448602  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:20.494183  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:20.731261  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:20.949482  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:20.995692  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:21.232073  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:21.450192  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:21.498852  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:21.732788  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:21.948759  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:21.995845  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:22.232190  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:22.449100  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:22.494475  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:22.732340  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:22.747173  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:22.947884  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:22.993975  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:23.232296  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:23.447954  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:23.494534  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:23.731764  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:23.948234  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:23.995370  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:24.231070  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:24.448652  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:24.494174  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:24.745192  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:24.749102  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:24.948490  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:24.994845  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:25.232508  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:25.448604  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:25.494120  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:25.731939  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:56:25.948935  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:25.994586  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:26.231835  283424 kapi.go:107] duration metric: took 17.004312127s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:56:26.448210  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:26.494976  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:26.751726  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:26.948585  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:26.995012  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:27.459629  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:27.493979  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:27.948352  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:27.997355  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:28.448720  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:28.494644  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:28.948298  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:28.993647  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:29.247812  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:29.448675  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:29.495570  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:29.948121  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:29.994727  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:30.449667  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:30.495324  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:30.948757  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:30.995239  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:31.448594  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:31.494483  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:31.748163  283424 pod_ready.go:103] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"False"
	I0920 17:56:31.948850  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:31.995747  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:32.448627  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:32.494199  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:32.746939  283424 pod_ready.go:93] pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace has status "Ready":"True"
	I0920 17:56:32.746967  283424 pod_ready.go:82] duration metric: took 33.506854076s for pod "coredns-7c65d6cfc9-7h69s" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.746980  283424 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gpttq" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.749366  283424 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-gpttq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gpttq" not found
	I0920 17:56:32.749439  283424 pod_ready.go:82] duration metric: took 2.449722ms for pod "coredns-7c65d6cfc9-gpttq" in "kube-system" namespace to be "Ready" ...
	E0920 17:56:32.749465  283424 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-gpttq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gpttq" not found
	I0920 17:56:32.749488  283424 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.755578  283424 pod_ready.go:93] pod "etcd-addons-850577" in "kube-system" namespace has status "Ready":"True"
	I0920 17:56:32.755652  283424 pod_ready.go:82] duration metric: took 6.123729ms for pod "etcd-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.755680  283424 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.762070  283424 pod_ready.go:93] pod "kube-apiserver-addons-850577" in "kube-system" namespace has status "Ready":"True"
	I0920 17:56:32.762166  283424 pod_ready.go:82] duration metric: took 6.464021ms for pod "kube-apiserver-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.762193  283424 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.769340  283424 pod_ready.go:93] pod "kube-controller-manager-addons-850577" in "kube-system" namespace has status "Ready":"True"
	I0920 17:56:32.769418  283424 pod_ready.go:82] duration metric: took 7.202139ms for pod "kube-controller-manager-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.769446  283424 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gpmhn" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.945465  283424 pod_ready.go:93] pod "kube-proxy-gpmhn" in "kube-system" namespace has status "Ready":"True"
	I0920 17:56:32.945540  283424 pod_ready.go:82] duration metric: took 176.073443ms for pod "kube-proxy-gpmhn" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.945567  283424 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:32.948543  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:32.994758  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:33.345997  283424 pod_ready.go:93] pod "kube-scheduler-addons-850577" in "kube-system" namespace has status "Ready":"True"
	I0920 17:56:33.346028  283424 pod_ready.go:82] duration metric: took 400.43927ms for pod "kube-scheduler-addons-850577" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:33.346041  283424 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6zxks" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:33.459031  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:33.560616  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:33.746372  283424 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6zxks" in "kube-system" namespace has status "Ready":"True"
	I0920 17:56:33.746400  283424 pod_ready.go:82] duration metric: took 400.349204ms for pod "nvidia-device-plugin-daemonset-6zxks" in "kube-system" namespace to be "Ready" ...
	I0920 17:56:33.746411  283424 pod_ready.go:39] duration metric: took 34.52508419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:56:33.746431  283424 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:56:33.746500  283424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:56:33.776057  283424 api_server.go:72] duration metric: took 37.962919733s to wait for apiserver process to appear ...
	I0920 17:56:33.776085  283424 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:56:33.776107  283424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 17:56:33.785725  283424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 17:56:33.786832  283424 api_server.go:141] control plane version: v1.31.1
	I0920 17:56:33.786858  283424 api_server.go:131] duration metric: took 10.766011ms to wait for apiserver health ...
	I0920 17:56:33.786868  283424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:56:33.951236  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:33.958653  283424 system_pods.go:59] 17 kube-system pods found
	I0920 17:56:33.958752  283424 system_pods.go:61] "coredns-7c65d6cfc9-7h69s" [9a558d7a-0902-4311-9e03-37b7e777b7ea] Running
	I0920 17:56:33.958778  283424 system_pods.go:61] "csi-hostpath-attacher-0" [1e6a1806-5a0e-4d9b-8226-6dd15458d077] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:56:33.958825  283424 system_pods.go:61] "csi-hostpath-resizer-0" [dea5489d-d945-4d45-ac46-bd6ec45c5d38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:56:33.958854  283424 system_pods.go:61] "csi-hostpathplugin-jtnb9" [4845c672-f105-4f74-a139-938c1552c61b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:56:33.958876  283424 system_pods.go:61] "etcd-addons-850577" [c828c556-c896-4188-9dea-49da081f4960] Running
	I0920 17:56:33.958909  283424 system_pods.go:61] "kube-apiserver-addons-850577" [46334579-4abf-478f-b45e-d12db5d4d6a6] Running
	I0920 17:56:33.958933  283424 system_pods.go:61] "kube-controller-manager-addons-850577" [4210bece-d382-4405-a4c0-4b72505b3388] Running
	I0920 17:56:33.958956  283424 system_pods.go:61] "kube-ingress-dns-minikube" [37bf8c14-c53d-4a81-b9d6-510e6390c934] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 17:56:33.958993  283424 system_pods.go:61] "kube-proxy-gpmhn" [f98ded18-6250-44d9-b128-ba3c6c921056] Running
	I0920 17:56:33.959023  283424 system_pods.go:61] "kube-scheduler-addons-850577" [992349d6-fee6-4700-a741-26e39b687ff3] Running
	I0920 17:56:33.959046  283424 system_pods.go:61] "metrics-server-84c5f94fbc-lzgjn" [a4dc3384-43e1-4ebe-8492-5adb0ce969d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:56:33.959081  283424 system_pods.go:61] "nvidia-device-plugin-daemonset-6zxks" [ac4e3a6e-15b5-407e-8aea-d28feff68e17] Running
	I0920 17:56:33.959105  283424 system_pods.go:61] "registry-66c9cd494c-8c8sz" [48080209-95a2-4f92-83d3-4a339a6b1b54] Running
	I0920 17:56:33.959126  283424 system_pods.go:61] "registry-proxy-fvmkn" [34be177b-c148-4a04-9275-afdde27c3678] Running
	I0920 17:56:33.959163  283424 system_pods.go:61] "snapshot-controller-56fcc65765-nmgdg" [553ea066-259f-4267-8358-c92e60831dd3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:56:33.959189  283424 system_pods.go:61] "snapshot-controller-56fcc65765-rct4g" [90ebe3ee-8d90-4610-9cc5-c31d34a9723e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:56:33.959219  283424 system_pods.go:61] "storage-provisioner" [c54b2527-1026-42bb-8514-949f115e15bf] Running
	I0920 17:56:33.959252  283424 system_pods.go:74] duration metric: took 172.377217ms to wait for pod list to return data ...
	I0920 17:56:33.959277  283424 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:56:33.994851  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:34.146795  283424 default_sa.go:45] found service account: "default"
	I0920 17:56:34.146873  283424 default_sa.go:55] duration metric: took 187.567128ms for default service account to be created ...
	I0920 17:56:34.146900  283424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:56:34.366466  283424 system_pods.go:86] 17 kube-system pods found
	I0920 17:56:34.366549  283424 system_pods.go:89] "coredns-7c65d6cfc9-7h69s" [9a558d7a-0902-4311-9e03-37b7e777b7ea] Running
	I0920 17:56:34.366578  283424 system_pods.go:89] "csi-hostpath-attacher-0" [1e6a1806-5a0e-4d9b-8226-6dd15458d077] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:56:34.366619  283424 system_pods.go:89] "csi-hostpath-resizer-0" [dea5489d-d945-4d45-ac46-bd6ec45c5d38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:56:34.366646  283424 system_pods.go:89] "csi-hostpathplugin-jtnb9" [4845c672-f105-4f74-a139-938c1552c61b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:56:34.366667  283424 system_pods.go:89] "etcd-addons-850577" [c828c556-c896-4188-9dea-49da081f4960] Running
	I0920 17:56:34.366685  283424 system_pods.go:89] "kube-apiserver-addons-850577" [46334579-4abf-478f-b45e-d12db5d4d6a6] Running
	I0920 17:56:34.366707  283424 system_pods.go:89] "kube-controller-manager-addons-850577" [4210bece-d382-4405-a4c0-4b72505b3388] Running
	I0920 17:56:34.366749  283424 system_pods.go:89] "kube-ingress-dns-minikube" [37bf8c14-c53d-4a81-b9d6-510e6390c934] Running
	I0920 17:56:34.366770  283424 system_pods.go:89] "kube-proxy-gpmhn" [f98ded18-6250-44d9-b128-ba3c6c921056] Running
	I0920 17:56:34.366790  283424 system_pods.go:89] "kube-scheduler-addons-850577" [992349d6-fee6-4700-a741-26e39b687ff3] Running
	I0920 17:56:34.366825  283424 system_pods.go:89] "metrics-server-84c5f94fbc-lzgjn" [a4dc3384-43e1-4ebe-8492-5adb0ce969d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:56:34.366852  283424 system_pods.go:89] "nvidia-device-plugin-daemonset-6zxks" [ac4e3a6e-15b5-407e-8aea-d28feff68e17] Running
	I0920 17:56:34.366872  283424 system_pods.go:89] "registry-66c9cd494c-8c8sz" [48080209-95a2-4f92-83d3-4a339a6b1b54] Running
	I0920 17:56:34.366893  283424 system_pods.go:89] "registry-proxy-fvmkn" [34be177b-c148-4a04-9275-afdde27c3678] Running
	I0920 17:56:34.366932  283424 system_pods.go:89] "snapshot-controller-56fcc65765-nmgdg" [553ea066-259f-4267-8358-c92e60831dd3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:56:34.366957  283424 system_pods.go:89] "snapshot-controller-56fcc65765-rct4g" [90ebe3ee-8d90-4610-9cc5-c31d34a9723e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:56:34.366974  283424 system_pods.go:89] "storage-provisioner" [c54b2527-1026-42bb-8514-949f115e15bf] Running
	I0920 17:56:34.366997  283424 system_pods.go:126] duration metric: took 220.07269ms to wait for k8s-apps to be running ...
	I0920 17:56:34.367030  283424 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:56:34.367109  283424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:56:34.406498  283424 system_svc.go:56] duration metric: took 39.469331ms WaitForService to wait for kubelet
	I0920 17:56:34.406529  283424 kubeadm.go:582] duration metric: took 38.593396515s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:56:34.406553  283424 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:56:34.449416  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:34.496947  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:34.546118  283424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 17:56:34.546152  283424 node_conditions.go:123] node cpu capacity is 2
	I0920 17:56:34.546171  283424 node_conditions.go:105] duration metric: took 139.607741ms to run NodePressure ...
	I0920 17:56:34.546198  283424 start.go:241] waiting for startup goroutines ...
	I0920 17:56:34.948149  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:34.994587  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:35.448660  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:35.494620  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:35.949242  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:35.995139  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:36.448758  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:36.494672  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:36.962490  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:36.995360  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:37.450204  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:37.495138  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:37.949241  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:37.995026  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:38.448534  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:38.494815  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:38.947920  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:38.995155  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:39.459438  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:39.561406  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:39.948595  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:40.003554  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:40.449214  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:40.496819  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:40.948517  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:40.995056  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:41.448447  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:41.494713  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:41.965070  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:41.995579  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:42.458305  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:42.496052  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:42.948244  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:42.997723  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:43.449363  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:43.494716  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:43.950153  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:43.994947  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:44.463915  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:44.572400  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:44.949126  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:44.996479  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:45.448727  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:45.494582  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:45.948593  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:45.996194  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:46.449483  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:46.494904  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:46.949305  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:46.995752  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:47.448888  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:47.494216  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:47.949517  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:47.994970  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:48.449453  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:48.550674  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:48.961188  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:49.061933  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:49.448826  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:49.549891  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:49.948326  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:49.994090  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:50.458486  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:50.495345  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:50.948602  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:50.995298  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:51.449376  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:51.495328  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:51.949374  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:52.050011  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:52.448244  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:52.496775  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:52.949026  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:52.995584  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:53.459105  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:53.497556  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:53.949300  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:53.999275  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:54.458515  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:54.494661  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:54.949075  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:54.995454  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:55.448974  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:55.494610  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:55.948398  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:55.994072  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:56.463573  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:56.496582  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:56.968323  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:56.994795  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:57.449233  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:57.498647  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:57.948849  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:57.994159  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:58.449067  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:58.494433  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:58.949612  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:58.996981  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:59.458676  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:59.558983  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:56:59.948559  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:56:59.994445  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:00.451118  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:00.517353  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:00.962997  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:00.995337  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:01.449087  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:01.495061  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:01.951979  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:01.995717  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:02.458787  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:02.494509  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:02.948559  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:02.993757  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:03.449262  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:03.495004  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:03.948652  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:03.994918  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:04.459019  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:04.495397  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:57:04.952547  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:04.994433  283424 kapi.go:107] duration metric: took 54.505059291s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:57:05.449146  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:05.948308  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:06.450210  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:06.950825  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:07.449142  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:07.948130  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:08.448340  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:08.948334  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:09.450890  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:09.948545  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:10.448093  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:10.948328  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:11.448165  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:11.949052  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:12.449633  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:12.950269  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:13.449384  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:13.950560  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:14.449601  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:14.949835  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:15.458948  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:15.949461  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:16.449035  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:16.948769  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:17.463527  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:17.949423  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:18.448668  283424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:57:18.949848  283424 kapi.go:107] duration metric: took 1m11.506140911s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 17:57:34.369403  283424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:57:34.369429  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:34.858281  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:35.357244  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:35.858154  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:36.357583  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:36.857378  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:37.357012  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:37.857570  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:38.357684  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:38.858620  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:39.356126  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:39.858521  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:40.357886  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:40.857737  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:41.356742  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:41.858963  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:42.357853  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:42.856881  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:43.356791  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:43.856350  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:44.357391  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:44.856922  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:45.357749  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:45.859431  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:46.357908  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:46.859843  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:47.358000  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:47.857855  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:48.357135  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:48.859529  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:49.358040  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:49.860076  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:50.356990  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:50.861191  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:51.357003  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:51.857888  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:52.357504  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:52.857180  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:53.357059  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:53.859502  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:54.356916  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:54.864837  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:55.357077  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:55.858861  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:56.360817  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:56.857183  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:57.357066  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:57.859678  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:58.356411  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:58.861181  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:59.356384  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:57:59.861380  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:00.359979  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:00.857681  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:01.356456  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:01.858688  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:02.356519  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:02.858209  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:03.356859  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:03.859218  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:04.357431  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:04.860320  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:05.356232  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:05.858404  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:06.357240  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:06.859084  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:07.356547  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:07.857571  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:08.357084  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:08.858481  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:09.356911  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:09.858149  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:10.356610  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:10.858492  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:11.357140  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:11.857699  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:12.356772  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:12.857861  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:13.357053  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:13.857305  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:14.356914  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:14.857902  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:15.357034  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:15.859300  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:16.362109  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:16.857280  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:17.357637  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:17.858659  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:18.356621  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:18.860241  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:19.357137  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:19.877854  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:20.358087  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:20.859625  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:21.356850  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:21.858932  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:22.357637  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:22.857579  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:23.357069  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:23.857922  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:24.356964  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:24.857982  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:25.356922  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:25.857634  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:26.356647  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:26.857197  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:27.357379  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:27.857736  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:28.356447  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:28.858381  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:29.356329  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:29.858567  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:30.356337  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:30.858429  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:31.357142  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:31.860072  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:32.357061  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:32.861631  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:33.357703  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:33.857924  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:34.356835  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:34.857389  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:35.357196  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:35.861314  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:36.356430  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:36.857860  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:37.356460  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:37.858160  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:38.357318  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:38.862161  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:39.357050  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:39.858411  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:40.357381  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:40.859121  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:41.357049  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:41.857623  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:42.358368  283424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:58:42.857435  283424 kapi.go:107] duration metric: took 2m30.504328972s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:58:42.860054  283424 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-850577 cluster.
	I0920 17:58:42.863069  283424 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:58:42.865687  283424 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:58:42.868464  283424 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, metrics-server, inspektor-gadget, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 17:58:42.871081  283424 addons.go:510] duration metric: took 2m47.057520984s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner default-storageclass metrics-server inspektor-gadget volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 17:58:42.871144  283424 start.go:246] waiting for cluster config update ...
	I0920 17:58:42.871171  283424 start.go:255] writing updated cluster config ...
	I0920 17:58:42.871512  283424 ssh_runner.go:195] Run: rm -f paused
	I0920 17:58:43.334528  283424 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:58:43.337813  283424 out.go:177] * Done! kubectl is now configured to use "addons-850577" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 20 18:08:18 addons-850577 dockerd[1288]: time="2024-09-20T18:08:18.053737207Z" level=info msg="ignoring event" container=42bb617c98fe5b3a5bac8530048d557d26030c843eb1b0e0c408ea5470627180 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:18 addons-850577 dockerd[1288]: time="2024-09-20T18:08:18.289346703Z" level=info msg="ignoring event" container=29de06590a5e97fb5fd981704f7bec809a13942843eb53e649b9c1c618626558 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:18 addons-850577 dockerd[1288]: time="2024-09-20T18:08:18.346170739Z" level=info msg="ignoring event" container=c87494f1c7215e410daaabaec6b9b2ebe7f9b87df449afc48138cd998b536707 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:20 addons-850577 dockerd[1288]: time="2024-09-20T18:08:20.670735863Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 20 18:08:20 addons-850577 dockerd[1288]: time="2024-09-20T18:08:20.673986011Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 20 18:08:24 addons-850577 dockerd[1288]: time="2024-09-20T18:08:24.756333361Z" level=info msg="ignoring event" container=f02bfd522ca040118dc01f93f3e93086400e3d1b58e384e4ea10796bd8c343e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:24 addons-850577 dockerd[1288]: time="2024-09-20T18:08:24.931070163Z" level=info msg="ignoring event" container=90b800dd9bf93fbe73a7eda6efa8133d1e14b77c1aa5c3f9690ad387263881c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:25 addons-850577 cri-dockerd[1546]: time="2024-09-20T18:08:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f3f97bf3dac9666b26089473fa62d95f75b165989621bab7f860437cb4ec475e/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 20 18:08:25 addons-850577 dockerd[1288]: time="2024-09-20T18:08:25.878401778Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 20 18:08:26 addons-850577 cri-dockerd[1546]: time="2024-09-20T18:08:26Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 20 18:08:26 addons-850577 dockerd[1288]: time="2024-09-20T18:08:26.565878391Z" level=info msg="ignoring event" container=c756f742dfff844d7c813feb3393ce6446a7502c6c6a1fa61beeedfb41398987 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:27 addons-850577 dockerd[1288]: time="2024-09-20T18:08:27.746764696Z" level=info msg="ignoring event" container=f3f97bf3dac9666b26089473fa62d95f75b165989621bab7f860437cb4ec475e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:29 addons-850577 cri-dockerd[1546]: time="2024-09-20T18:08:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a999b9269ffe1d07c450dae0c7948e4b7fd74254e2daefb5a3dd0233ef273244/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 20 18:08:30 addons-850577 cri-dockerd[1546]: time="2024-09-20T18:08:30Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 20 18:08:30 addons-850577 dockerd[1288]: time="2024-09-20T18:08:30.700515167Z" level=info msg="ignoring event" container=9394472b4acd8fbdcf18d136a44adc214e9df24bc702b32f55cdc77ad9a072ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:32 addons-850577 dockerd[1288]: time="2024-09-20T18:08:32.889394448Z" level=info msg="ignoring event" container=a999b9269ffe1d07c450dae0c7948e4b7fd74254e2daefb5a3dd0233ef273244 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:34 addons-850577 cri-dockerd[1546]: time="2024-09-20T18:08:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea3e387b9bb0160af6f4159cf700caec5448361765dfe1e2fd99accc2d53587b/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 20 18:08:34 addons-850577 dockerd[1288]: time="2024-09-20T18:08:34.665756409Z" level=info msg="ignoring event" container=af5c48ed52189edf7bba805d88cf48f7ac9e268d8c17c8a2df6e4fd215fe6a3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:35 addons-850577 dockerd[1288]: time="2024-09-20T18:08:35.966061751Z" level=info msg="ignoring event" container=ea3e387b9bb0160af6f4159cf700caec5448361765dfe1e2fd99accc2d53587b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:41 addons-850577 dockerd[1288]: time="2024-09-20T18:08:41.070744807Z" level=info msg="ignoring event" container=cc2cb9727eaa91716f0b0afdf52be1b30f72e161658221cc04c0e43a8bcdc172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:41 addons-850577 dockerd[1288]: time="2024-09-20T18:08:41.786413597Z" level=info msg="ignoring event" container=ab743b3168bf21c5901c6013e3eedeb2a4218eb2ac593c9f1797bd67e214b7cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:41 addons-850577 dockerd[1288]: time="2024-09-20T18:08:41.894631167Z" level=info msg="ignoring event" container=0ae58fce91b0234aafb8c21e9914a236a4ade7bf57961951420d90f04432f6be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:42 addons-850577 dockerd[1288]: time="2024-09-20T18:08:42.057239022Z" level=info msg="ignoring event" container=44d62df1c000df7db22c3f1925af70250a483f799af5e37e6f04b926186d981c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:08:42 addons-850577 cri-dockerd[1546]: time="2024-09-20T18:08:42Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-fvmkn_kube-system\": unexpected command output nsenter: cannot open /proc/3613/ns/net: No such file or directory\n with error: exit status 1"
	Sep 20 18:08:42 addons-850577 dockerd[1288]: time="2024-09-20T18:08:42.378901946Z" level=info msg="ignoring event" container=15e875e274ee8e563d60e4ce52fa127809b59d532d0aa9c1672dea5f2653bf60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	af5c48ed52189       fc9db2894f4e4                                                                                                                9 seconds ago       Exited              helper-pod                0                   ea3e387b9bb01       helper-pod-delete-pvc-f345ef47-0969-4cae-a23c-0960456ac5a9
	9394472b4acd8       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              13 seconds ago      Exited              busybox                   0                   a999b9269ffe1       test-local-path
	cb9144005732b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            43 seconds ago      Exited              gadget                    7                   6dc6e4b8b08fb       gadget-72nst
	f1bde31761a4f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   0c76d2f64ccdd       gcp-auth-89d5ffd79-wtttv
	6c14536a93d9e       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   3e0433bf6afbf       ingress-nginx-controller-bc57996ff-9r7sq
	567c8958e15a2       420193b27261a                                                                                                                11 minutes ago      Exited              patch                     1                   5cdc93e3d8a54       ingress-nginx-admission-patch-dgrfg
	9034b6b32e919       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   95a75a87464ea       ingress-nginx-admission-create-d6p8g
	b2630aea8d232       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   5ff619e647444       local-path-provisioner-86d989889c-wczhz
	59d07b7bbb5ac       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   2ffb82c6c7346       metrics-server-84c5f94fbc-lzgjn
	399bc310c0c81       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   303559a542e5b       kube-ingress-dns-minikube
	0ae58fce91b02       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   15e875e274ee8       registry-proxy-fvmkn
	81e12cd1b2dac       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e               12 minutes ago      Running             cloud-spanner-emulator    0                   43b1240fed69a       cloud-spanner-emulator-5b584cc74-rjshz
	48ba5c67461f1       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   82cfdf70ec2b6       storage-provisioner
	6f1c6e5b28358       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   b210c2133221f       coredns-7c65d6cfc9-7h69s
	8df24902f2157       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   820aa048f8b68       kube-proxy-gpmhn
	0440cf4705730       279f381cb3736                                                                                                                13 minutes ago      Running             kube-controller-manager   0                   ac11da38fda35       kube-controller-manager-addons-850577
	e279c84c57659       27e3830e14027                                                                                                                13 minutes ago      Running             etcd                      0                   058e7aedcea6e       etcd-addons-850577
	b9933d5643290       7f8aa378bb47d                                                                                                                13 minutes ago      Running             kube-scheduler            0                   1c1759dc9888c       kube-scheduler-addons-850577
	d24680d8b6670       d3f53a98c0a9d                                                                                                                13 minutes ago      Running             kube-apiserver            0                   74b068e8b79bb       kube-apiserver-addons-850577
	
	
	==> controller_ingress [6c14536a93d9] <==
	W0920 17:57:17.360165       6 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0920 17:57:17.360479       6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0920 17:57:17.378376       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0920 17:57:17.804569       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0920 17:57:17.827716       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0920 17:57:17.838092       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0920 17:57:17.871945       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d5ae74a7-8c87-4b0f-b498-27cfb727cf99", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0920 17:57:17.878082       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"4122a3f7-cefb-44e5-afc7-3d2a42768c05", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0920 17:57:17.878170       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"d8e69daa-cb9f-4a6b-95ee-94eff9538bd9", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0920 17:57:19.041717       6 nginx.go:317] "Starting NGINX process"
	I0920 17:57:19.041933       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0920 17:57:19.042582       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0920 17:57:19.042907       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 17:57:19.062287       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0920 17:57:19.063503       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-9r7sq"
	I0920 17:57:19.069816       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-9r7sq" node="addons-850577"
	I0920 17:57:19.098862       6 controller.go:213] "Backend successfully reloaded"
	I0920 17:57:19.099045       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0920 17:57:19.099462       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9r7sq", UID:"46e78e03-ff68-4691-8e62-89e4d6375b24", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [6f1c6e5b2835] <==
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46447 - 29647 "HINFO IN 2165942655590568091.7520587168380827109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004666659s
	[INFO] 10.244.0.7:60227 - 23748 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000326196s
	[INFO] 10.244.0.7:60227 - 60608 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000208644s
	[INFO] 10.244.0.7:57680 - 10623 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000193858s
	[INFO] 10.244.0.7:57680 - 12803 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096983s
	[INFO] 10.244.0.7:55798 - 25193 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100847s
	[INFO] 10.244.0.7:55798 - 27222 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112506s
	[INFO] 10.244.0.7:52421 - 6955 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104301s
	[INFO] 10.244.0.7:52421 - 40229 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096474s
	[INFO] 10.244.0.7:57043 - 3415 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002606444s
	[INFO] 10.244.0.7:57043 - 47705 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002563868s
	[INFO] 10.244.0.7:35399 - 63224 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088039s
	[INFO] 10.244.0.7:35399 - 30966 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000221181s
	[INFO] 10.244.0.25:38476 - 14519 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286386s
	[INFO] 10.244.0.25:52326 - 48525 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153466s
	[INFO] 10.244.0.25:47119 - 39882 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000228623s
	[INFO] 10.244.0.25:43140 - 49227 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112548s
	[INFO] 10.244.0.25:45376 - 15040 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000283883s
	[INFO] 10.244.0.25:40073 - 61313 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114877s
	[INFO] 10.244.0.25:59465 - 5287 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.0031376s
	[INFO] 10.244.0.25:53399 - 52561 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002925477s
	[INFO] 10.244.0.25:57179 - 7042 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0041789s
	[INFO] 10.244.0.25:52302 - 38193 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005090815s
	
	
	==> describe nodes <==
	Name:               addons-850577
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-850577
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-850577
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_55_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-850577
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:55:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-850577
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:08:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:04:31 +0000   Fri, 20 Sep 2024 17:55:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:04:31 +0000   Fri, 20 Sep 2024 17:55:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:04:31 +0000   Fri, 20 Sep 2024 17:55:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:04:31 +0000   Fri, 20 Sep 2024 17:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-850577
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dac8d98d5be146bab3e0b9a96da0cb9c
	  System UUID:                811a88e0-455f-4a45-868d-205d0da4cf1f
	  Boot ID:                    7d682649-b07c-44b5-a0a6-3c50df538ea4
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-5b584cc74-rjshz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-72nst                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-wtttv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-9r7sq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-7h69s                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-850577                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-850577                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-850577       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gpmhn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-850577                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-lzgjn             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-wczhz     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-850577 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-850577 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-850577 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-850577 event: Registered Node addons-850577 in Controller
	
	
	==> dmesg <==
	[Sep20 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015881] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.524754] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.847718] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.590447] kauditd_printk_skb: 36 callbacks suppressed
	[Sep20 16:50] FS-Cache: Duplicate cookie detected
	[  +0.000825] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001034] FS-Cache: O-cookie d=000000008ea1ce8f{9P.session} n=000000001a8428bc
	[  +0.001205] FS-Cache: O-key=[10] '34323935333931393837'
	[  +0.000859] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=000000008ea1ce8f{9P.session} n=00000000d3d4e3e3
	[  +0.001174] FS-Cache: N-key=[10] '34323935333931393837'
	[Sep20 16:51] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep20 16:52] hrtimer: interrupt took 16185074 ns
	[Sep20 17:26] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [e279c84c5765] <==
	{"level":"info","ts":"2024-09-20T17:55:43.598703Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:55:43.599337Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:55:43.848746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:55:43.848975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:55:43.849100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T17:55:43.849209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:43.849286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:43.849397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:43.849478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T17:55:43.852821Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:55:43.856930Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-850577 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:55:43.857132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:55:43.857666Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:55:43.857872Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:55:43.857978Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:55:43.857696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:55:43.858403Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:55:43.858851Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:55:43.868929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T17:55:43.859561Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:55:43.869311Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:55:43.872739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:05:45.521388Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1862}
	{"level":"info","ts":"2024-09-20T18:05:45.564340Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1862,"took":"42.306509ms","hash":2504355425,"current-db-size-bytes":8687616,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4902912,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-20T18:05:45.564393Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2504355425,"revision":1862,"compact-revision":-1}
	
	
	==> gcp-auth [f1bde31761a4] <==
	2024/09/20 17:58:42 GCP Auth Webhook started!
	2024/09/20 17:58:59 Ready to marshal response ...
	2024/09/20 17:58:59 Ready to write response ...
	2024/09/20 17:59:00 Ready to marshal response ...
	2024/09/20 17:59:00 Ready to write response ...
	2024/09/20 17:59:24 Ready to marshal response ...
	2024/09/20 17:59:24 Ready to write response ...
	2024/09/20 17:59:25 Ready to marshal response ...
	2024/09/20 17:59:25 Ready to write response ...
	2024/09/20 17:59:25 Ready to marshal response ...
	2024/09/20 17:59:25 Ready to write response ...
	2024/09/20 18:07:40 Ready to marshal response ...
	2024/09/20 18:07:40 Ready to write response ...
	2024/09/20 18:07:46 Ready to marshal response ...
	2024/09/20 18:07:46 Ready to write response ...
	2024/09/20 18:08:01 Ready to marshal response ...
	2024/09/20 18:08:01 Ready to write response ...
	2024/09/20 18:08:25 Ready to marshal response ...
	2024/09/20 18:08:25 Ready to write response ...
	2024/09/20 18:08:25 Ready to marshal response ...
	2024/09/20 18:08:25 Ready to write response ...
	2024/09/20 18:08:33 Ready to marshal response ...
	2024/09/20 18:08:33 Ready to write response ...
	
	
	==> kernel <==
	 18:08:43 up  1:51,  0 users,  load average: 2.95, 1.47, 1.97
	Linux addons-850577 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [d24680d8b667] <==
	I0920 17:59:15.658864       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:59:15.957574       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:59:16.001839       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:59:16.227141       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 17:59:16.378403       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 17:59:16.658995       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 17:59:16.783703       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 17:59:16.823625       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 17:59:16.910709       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 17:59:17.239069       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 17:59:17.339486       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 18:07:53.929744       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 18:08:17.755929       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:08:17.755996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:08:17.780306       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:08:17.780369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:08:17.810218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:08:17.810278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:08:17.950482       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:08:17.950546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:08:17.992587       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:08:17.993781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 18:08:18.950923       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 18:08:18.993297       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 18:08:18.998597       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [0440cf470573] <==
	I0920 18:08:25.181235       1 shared_informer.go:320] Caches are synced for resource quota
	W0920 18:08:25.214853       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:25.214908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:08:25.468302       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 18:08:25.468506       1 shared_informer.go:320] Caches are synced for garbage collector
	W0920 18:08:26.702369       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:26.702425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:08:27.596056       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:27.596111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:08:28.545845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:28.545891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:08:31.566215       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:31.566264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:08:34.502027       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="9.001µs"
	W0920 18:08:35.919013       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:35.919068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:08:37.874038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:37.874081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:08:38.739004       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:38.739050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:08:39.370146       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:39.370194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:08:41.701271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.826µs"
	W0920 18:08:42.184489       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:08:42.184555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8df24902f215] <==
	I0920 17:55:57.219003       1 server_linux.go:66] "Using iptables proxy"
	I0920 17:55:57.311066       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 17:55:57.311131       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:55:57.375285       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 17:55:57.375358       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:55:57.380403       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:55:57.380759       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:55:57.380775       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:55:57.382541       1 config.go:199] "Starting service config controller"
	I0920 17:55:57.382567       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:55:57.382595       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:55:57.382599       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:55:57.386499       1 config.go:328] "Starting node config controller"
	I0920 17:55:57.386515       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:55:57.482846       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:55:57.482911       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:55:57.486674       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b9933d564329] <==
	W0920 17:55:47.904473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:55:47.904595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:47.904725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:55:47.904872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:47.905226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:55:47.905369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:47.905461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:55:47.905546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:47.905648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:55:47.905743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:47.905840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:55:47.905925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:48.720843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:55:48.721138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:48.746651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:55:48.746781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:48.746908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:55:48.746962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:48.803865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:55:48.803905       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:48.817465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:55:48.817569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:55:48.842323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:55:48.842603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 17:55:49.388584       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:08:36 addons-850577 kubelet[2351]: I0920 18:08:36.133706    2351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a432d9a-a7dc-4eee-b82a-b9bf9d04b997-kube-api-access-6pcds" (OuterVolumeSpecName: "kube-api-access-6pcds") pod "6a432d9a-a7dc-4eee-b82a-b9bf9d04b997" (UID: "6a432d9a-a7dc-4eee-b82a-b9bf9d04b997"). InnerVolumeSpecName "kube-api-access-6pcds". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:08:36 addons-850577 kubelet[2351]: I0920 18:08:36.229154    2351 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6a432d9a-a7dc-4eee-b82a-b9bf9d04b997-gcp-creds\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:36 addons-850577 kubelet[2351]: I0920 18:08:36.229202    2351 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/6a432d9a-a7dc-4eee-b82a-b9bf9d04b997-data\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:36 addons-850577 kubelet[2351]: I0920 18:08:36.229212    2351 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/6a432d9a-a7dc-4eee-b82a-b9bf9d04b997-script\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:36 addons-850577 kubelet[2351]: I0920 18:08:36.229223    2351 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6pcds\" (UniqueName: \"kubernetes.io/projected/6a432d9a-a7dc-4eee-b82a-b9bf9d04b997-kube-api-access-6pcds\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:36 addons-850577 kubelet[2351]: I0920 18:08:36.895642    2351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea3e387b9bb0160af6f4159cf700caec5448361765dfe1e2fd99accc2d53587b"
	Sep 20 18:08:40 addons-850577 kubelet[2351]: I0920 18:08:40.465862    2351 scope.go:117] "RemoveContainer" containerID="cb9144005732b0b4023d89552ae77410821ccd8e9901dcd7e971e3510fd08258"
	Sep 20 18:08:40 addons-850577 kubelet[2351]: E0920 18:08:40.466502    2351 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-72nst_gadget(f6786842-da0a-41e3-aa14-f99bd8e3655a)\"" pod="gadget/gadget-72nst" podUID="f6786842-da0a-41e3-aa14-f99bd8e3655a"
	Sep 20 18:08:40 addons-850577 kubelet[2351]: I0920 18:08:40.475966    2351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a432d9a-a7dc-4eee-b82a-b9bf9d04b997" path="/var/lib/kubelet/pods/6a432d9a-a7dc-4eee-b82a-b9bf9d04b997/volumes"
	Sep 20 18:08:41 addons-850577 kubelet[2351]: I0920 18:08:41.275806    2351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/25ec6de7-555e-4a19-af2f-2684fc2e84a8-gcp-creds\") pod \"25ec6de7-555e-4a19-af2f-2684fc2e84a8\" (UID: \"25ec6de7-555e-4a19-af2f-2684fc2e84a8\") "
	Sep 20 18:08:41 addons-850577 kubelet[2351]: I0920 18:08:41.275963    2351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8klmr\" (UniqueName: \"kubernetes.io/projected/25ec6de7-555e-4a19-af2f-2684fc2e84a8-kube-api-access-8klmr\") pod \"25ec6de7-555e-4a19-af2f-2684fc2e84a8\" (UID: \"25ec6de7-555e-4a19-af2f-2684fc2e84a8\") "
	Sep 20 18:08:41 addons-850577 kubelet[2351]: I0920 18:08:41.275900    2351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ec6de7-555e-4a19-af2f-2684fc2e84a8-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "25ec6de7-555e-4a19-af2f-2684fc2e84a8" (UID: "25ec6de7-555e-4a19-af2f-2684fc2e84a8"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:08:41 addons-850577 kubelet[2351]: I0920 18:08:41.276733    2351 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/25ec6de7-555e-4a19-af2f-2684fc2e84a8-gcp-creds\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:41 addons-850577 kubelet[2351]: I0920 18:08:41.283314    2351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ec6de7-555e-4a19-af2f-2684fc2e84a8-kube-api-access-8klmr" (OuterVolumeSpecName: "kube-api-access-8klmr") pod "25ec6de7-555e-4a19-af2f-2684fc2e84a8" (UID: "25ec6de7-555e-4a19-af2f-2684fc2e84a8"). InnerVolumeSpecName "kube-api-access-8klmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:08:41 addons-850577 kubelet[2351]: I0920 18:08:41.377295    2351 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8klmr\" (UniqueName: \"kubernetes.io/projected/25ec6de7-555e-4a19-af2f-2684fc2e84a8-kube-api-access-8klmr\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.289052    2351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vplmj\" (UniqueName: \"kubernetes.io/projected/48080209-95a2-4f92-83d3-4a339a6b1b54-kube-api-access-vplmj\") pod \"48080209-95a2-4f92-83d3-4a339a6b1b54\" (UID: \"48080209-95a2-4f92-83d3-4a339a6b1b54\") "
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.294740    2351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48080209-95a2-4f92-83d3-4a339a6b1b54-kube-api-access-vplmj" (OuterVolumeSpecName: "kube-api-access-vplmj") pod "48080209-95a2-4f92-83d3-4a339a6b1b54" (UID: "48080209-95a2-4f92-83d3-4a339a6b1b54"). InnerVolumeSpecName "kube-api-access-vplmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.331440    2351 scope.go:117] "RemoveContainer" containerID="ab743b3168bf21c5901c6013e3eedeb2a4218eb2ac593c9f1797bd67e214b7cf"
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.390195    2351 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vplmj\" (UniqueName: \"kubernetes.io/projected/48080209-95a2-4f92-83d3-4a339a6b1b54-kube-api-access-vplmj\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.486035    2351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ec6de7-555e-4a19-af2f-2684fc2e84a8" path="/var/lib/kubelet/pods/25ec6de7-555e-4a19-af2f-2684fc2e84a8/volumes"
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.486316    2351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48080209-95a2-4f92-83d3-4a339a6b1b54" path="/var/lib/kubelet/pods/48080209-95a2-4f92-83d3-4a339a6b1b54/volumes"
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.591608    2351 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrfx\" (UniqueName: \"kubernetes.io/projected/34be177b-c148-4a04-9275-afdde27c3678-kube-api-access-wwrfx\") pod \"34be177b-c148-4a04-9275-afdde27c3678\" (UID: \"34be177b-c148-4a04-9275-afdde27c3678\") "
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.594095    2351 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34be177b-c148-4a04-9275-afdde27c3678-kube-api-access-wwrfx" (OuterVolumeSpecName: "kube-api-access-wwrfx") pod "34be177b-c148-4a04-9275-afdde27c3678" (UID: "34be177b-c148-4a04-9275-afdde27c3678"). InnerVolumeSpecName "kube-api-access-wwrfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:08:42 addons-850577 kubelet[2351]: I0920 18:08:42.692667    2351 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wwrfx\" (UniqueName: \"kubernetes.io/projected/34be177b-c148-4a04-9275-afdde27c3678-kube-api-access-wwrfx\") on node \"addons-850577\" DevicePath \"\""
	Sep 20 18:08:43 addons-850577 kubelet[2351]: I0920 18:08:43.355253    2351 scope.go:117] "RemoveContainer" containerID="0ae58fce91b0234aafb8c21e9914a236a4ade7bf57961951420d90f04432f6be"
	
	
	==> storage-provisioner [48ba5c67461f] <==
	I0920 17:56:03.431077       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:56:03.452085       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:56:03.452136       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:56:03.473567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:56:03.477996       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-850577_72358eea-0e46-48b6-87a8-3a0dbb56c58e!
	I0920 17:56:03.478066       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b43ea81-2cf6-4cdb-a253-10a333a9bc53", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-850577_72358eea-0e46-48b6-87a8-3a0dbb56c58e became leader
	I0920 17:56:03.582156       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-850577_72358eea-0e46-48b6-87a8-3a0dbb56c58e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-850577 -n addons-850577
helpers_test.go:261: (dbg) Run:  kubectl --context addons-850577 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-d6p8g ingress-nginx-admission-patch-dgrfg
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-850577 describe pod busybox ingress-nginx-admission-create-d6p8g ingress-nginx-admission-patch-dgrfg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-850577 describe pod busybox ingress-nginx-admission-create-d6p8g ingress-nginx-admission-patch-dgrfg: exit status 1 (110.609984ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-850577/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 17:59:25 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9jgm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c9jgm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-850577
	  Warning  Failed     7m55s (x6 over 9m18s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m42s (x4 over 9m19s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m19s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m19s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m11s (x21 over 9m18s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d6p8g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dgrfg" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-850577 describe pod busybox ingress-nginx-admission-create-d6p8g ingress-nginx-admission-patch-dgrfg: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.80s)


Test pass (318/342)

Order passed test Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.03
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 7.55
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.23
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 93.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 224.67
29 TestAddons/serial/Volcano 41.44
31 TestAddons/serial/GCPAuth/Namespaces 0.21
34 TestAddons/parallel/Ingress 19.62
35 TestAddons/parallel/InspektorGadget 12
36 TestAddons/parallel/MetricsServer 6.77
38 TestAddons/parallel/CSI 37.84
39 TestAddons/parallel/Headlamp 16.21
40 TestAddons/parallel/CloudSpanner 5.55
41 TestAddons/parallel/LocalPath 52.72
42 TestAddons/parallel/NvidiaDevicePlugin 6.49
43 TestAddons/parallel/Yakd 11.77
44 TestAddons/StoppedEnableDisable 11.29
45 TestCertOptions 37.78
46 TestCertExpiration 256.96
47 TestDockerFlags 44.78
48 TestForceSystemdFlag 48.92
49 TestForceSystemdEnv 46.49
55 TestErrorSpam/setup 30.37
56 TestErrorSpam/start 0.81
57 TestErrorSpam/status 1.15
58 TestErrorSpam/pause 1.48
59 TestErrorSpam/unpause 1.47
60 TestErrorSpam/stop 2.13
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 69.23
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 39.46
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
72 TestFunctional/serial/CacheCmd/cache/add_local 1.08
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.17
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
80 TestFunctional/serial/ExtraConfig 44.73
81 TestFunctional/serial/ComponentHealth 0.11
82 TestFunctional/serial/LogsCmd 1.25
83 TestFunctional/serial/LogsFileCmd 1.28
84 TestFunctional/serial/InvalidService 4.67
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 11.82
88 TestFunctional/parallel/DryRun 0.44
89 TestFunctional/parallel/InternationalLanguage 0.2
90 TestFunctional/parallel/StatusCmd 1.14
94 TestFunctional/parallel/ServiceCmdConnect 13.7
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 27.08
98 TestFunctional/parallel/SSHCmd 0.72
99 TestFunctional/parallel/CpCmd 2.58
101 TestFunctional/parallel/FileSync 0.38
102 TestFunctional/parallel/CertSync 2.19
106 TestFunctional/parallel/NodeLabels 0.12
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
110 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.5
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
124 TestFunctional/parallel/ProfileCmd/profile_list 0.44
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
126 TestFunctional/parallel/ServiceCmd/List 0.65
127 TestFunctional/parallel/MountCmd/any-port 8.76
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
130 TestFunctional/parallel/ServiceCmd/Format 0.53
131 TestFunctional/parallel/ServiceCmd/URL 0.51
132 TestFunctional/parallel/MountCmd/specific-port 2.9
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.52
134 TestFunctional/parallel/Version/short 0.1
135 TestFunctional/parallel/Version/components 1.16
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
141 TestFunctional/parallel/ImageCommands/Setup 0.74
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
146 TestFunctional/parallel/DockerEnv/bash 1.4
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 135.41
160 TestMultiControlPlane/serial/DeployApp 42.2
161 TestMultiControlPlane/serial/PingHostFromPods 1.81
162 TestMultiControlPlane/serial/AddWorkerNode 28.81
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
165 TestMultiControlPlane/serial/CopyFile 21.75
166 TestMultiControlPlane/serial/StopSecondaryNode 11.92
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
168 TestMultiControlPlane/serial/RestartSecondaryNode 66.84
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 254.91
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.93
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.85
173 TestMultiControlPlane/serial/StopCluster 33.03
174 TestMultiControlPlane/serial/RestartCluster 167.68
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
176 TestMultiControlPlane/serial/AddSecondaryNode 50.74
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.18
180 TestImageBuild/serial/Setup 35.34
181 TestImageBuild/serial/NormalBuild 2.22
182 TestImageBuild/serial/BuildWithBuildArg 1.13
183 TestImageBuild/serial/BuildWithDockerIgnore 1.11
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.15
188 TestJSONOutput/start/Command 78.87
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.66
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.58
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.84
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.23
213 TestKicCustomNetwork/create_custom_network 40.48
214 TestKicCustomNetwork/use_default_bridge_network 35.98
215 TestKicExistingNetwork 34.36
216 TestKicCustomSubnet 34.87
217 TestKicStaticIP 38.36
218 TestMainNoArgs 0.07
219 TestMinikubeProfile 74.9
222 TestMountStart/serial/StartWithMountFirst 8.81
223 TestMountStart/serial/VerifyMountFirst 0.28
224 TestMountStart/serial/StartWithMountSecond 8.73
225 TestMountStart/serial/VerifyMountSecond 0.29
226 TestMountStart/serial/DeleteFirst 1.52
227 TestMountStart/serial/VerifyMountPostDelete 0.37
228 TestMountStart/serial/Stop 1.26
229 TestMountStart/serial/RestartStopped 8.91
230 TestMountStart/serial/VerifyMountPostStop 0.26
233 TestMultiNode/serial/FreshStart2Nodes 84.64
234 TestMultiNode/serial/DeployApp2Nodes 37
235 TestMultiNode/serial/PingHostFrom2Pods 1.11
236 TestMultiNode/serial/AddNode 18.31
237 TestMultiNode/serial/MultiNodeLabels 0.12
238 TestMultiNode/serial/ProfileList 0.78
239 TestMultiNode/serial/CopyFile 10.68
240 TestMultiNode/serial/StopNode 2.35
241 TestMultiNode/serial/StartAfterStop 11.52
242 TestMultiNode/serial/RestartKeepsNodes 117.36
243 TestMultiNode/serial/DeleteNode 6.04
244 TestMultiNode/serial/StopMultiNode 21.57
245 TestMultiNode/serial/RestartMultiNode 52.4
246 TestMultiNode/serial/ValidateNameConflict 36.91
251 TestPreload 115.94
253 TestScheduledStopUnix 108.76
254 TestSkaffold 129.41
256 TestInsufficientStorage 12.52
257 TestRunningBinaryUpgrade 126.02
259 TestKubernetesUpgrade 140.71
260 TestMissingContainerUpgrade 126.61
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 49.79
264 TestNoKubernetes/serial/StartWithStopK8s 19.77
265 TestNoKubernetes/serial/Start 10.62
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
267 TestNoKubernetes/serial/ProfileList 1.15
268 TestNoKubernetes/serial/Stop 1.22
269 TestNoKubernetes/serial/StartNoArgs 8.24
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.58
282 TestStoppedBinaryUpgrade/Setup 0.92
283 TestStoppedBinaryUpgrade/Upgrade 131.15
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.4
293 TestPause/serial/Start 85.97
294 TestNetworkPlugins/group/auto/Start 52.5
295 TestPause/serial/SecondStartNoReconfiguration 36.53
296 TestNetworkPlugins/group/auto/KubeletFlags 0.3
297 TestNetworkPlugins/group/auto/NetCatPod 11.31
298 TestPause/serial/Pause 0.64
299 TestPause/serial/VerifyStatus 0.37
300 TestPause/serial/Unpause 0.57
301 TestPause/serial/PauseAgain 0.78
302 TestPause/serial/DeletePaused 2.12
303 TestPause/serial/VerifyDeletedResources 0.4
304 TestNetworkPlugins/group/kindnet/Start 75.5
305 TestNetworkPlugins/group/auto/DNS 0.34
306 TestNetworkPlugins/group/auto/Localhost 0.25
307 TestNetworkPlugins/group/auto/HairPin 0.22
308 TestNetworkPlugins/group/calico/Start 85.68
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
311 TestNetworkPlugins/group/kindnet/NetCatPod 13.36
312 TestNetworkPlugins/group/kindnet/DNS 0.36
313 TestNetworkPlugins/group/kindnet/Localhost 0.26
314 TestNetworkPlugins/group/kindnet/HairPin 0.29
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/custom-flannel/Start 64.73
317 TestNetworkPlugins/group/calico/KubeletFlags 0.39
318 TestNetworkPlugins/group/calico/NetCatPod 13.44
319 TestNetworkPlugins/group/calico/DNS 0.26
320 TestNetworkPlugins/group/calico/Localhost 0.24
321 TestNetworkPlugins/group/calico/HairPin 0.31
322 TestNetworkPlugins/group/false/Start 80.63
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.43
325 TestNetworkPlugins/group/custom-flannel/DNS 0.26
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.33
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
328 TestNetworkPlugins/group/enable-default-cni/Start 48.65
329 TestNetworkPlugins/group/false/KubeletFlags 0.4
330 TestNetworkPlugins/group/false/NetCatPod 13.4
331 TestNetworkPlugins/group/false/DNS 0.21
332 TestNetworkPlugins/group/false/Localhost 0.19
333 TestNetworkPlugins/group/false/HairPin 0.39
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.39
336 TestNetworkPlugins/group/flannel/Start 62.1
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
340 TestNetworkPlugins/group/bridge/Start 81.44
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
343 TestNetworkPlugins/group/flannel/NetCatPod 10.53
344 TestNetworkPlugins/group/flannel/DNS 0.26
345 TestNetworkPlugins/group/flannel/Localhost 0.2
346 TestNetworkPlugins/group/flannel/HairPin 0.18
347 TestNetworkPlugins/group/kubenet/Start 87.52
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
349 TestNetworkPlugins/group/bridge/NetCatPod 10.52
350 TestNetworkPlugins/group/bridge/DNS 0.22
351 TestNetworkPlugins/group/bridge/Localhost 0.22
352 TestNetworkPlugins/group/bridge/HairPin 0.28
354 TestStartStop/group/old-k8s-version/serial/FirstStart 180.5
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.39
356 TestNetworkPlugins/group/kubenet/NetCatPod 12.45
357 TestNetworkPlugins/group/kubenet/DNS 0.3
358 TestNetworkPlugins/group/kubenet/Localhost 0.22
359 TestNetworkPlugins/group/kubenet/HairPin 0.3
361 TestStartStop/group/no-preload/serial/FirstStart 83.85
362 TestStartStop/group/no-preload/serial/DeployApp 10.39
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
364 TestStartStop/group/no-preload/serial/Stop 11.17
365 TestStartStop/group/old-k8s-version/serial/DeployApp 8.64
366 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.68
367 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
368 TestStartStop/group/no-preload/serial/SecondStart 290.9
369 TestStartStop/group/old-k8s-version/serial/Stop 11.59
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
371 TestStartStop/group/old-k8s-version/serial/SecondStart 376.95
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/no-preload/serial/Pause 3.21
377 TestStartStop/group/embed-certs/serial/FirstStart 74.56
378 TestStartStop/group/embed-certs/serial/DeployApp 9.39
379 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
381 TestStartStop/group/embed-certs/serial/Stop 11.34
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
383 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/old-k8s-version/serial/Pause 2.91
385 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
386 TestStartStop/group/embed-certs/serial/SecondStart 293.57
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.23
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.95
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.82
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
397 TestStartStop/group/embed-certs/serial/Pause 3.13
399 TestStartStop/group/newest-cni/serial/FirstStart 38.17
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
402 TestStartStop/group/newest-cni/serial/Stop 5.9
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
404 TestStartStop/group/newest-cni/serial/SecondStart 19.62
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
408 TestStartStop/group/newest-cni/serial/Pause 3.71
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.05
TestDownloadOnly/v1.20.0/json-events (8.03s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-997842 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-997842 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.032305462s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.03s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 17:54:48.885679  282659 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 17:54:48.885764  282659 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-997842
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-997842: exit status 85 (90.276726ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-997842 | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |          |
	|         | -p download-only-997842        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:54:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:54:40.901639  282664 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:54:40.901884  282664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:54:40.901914  282664 out.go:358] Setting ErrFile to fd 2...
	I0920 17:54:40.901935  282664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:54:40.902255  282664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	W0920 17:54:40.902446  282664 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19679-277267/.minikube/config/config.json: open /home/jenkins/minikube-integration/19679-277267/.minikube/config/config.json: no such file or directory
	I0920 17:54:40.902966  282664 out.go:352] Setting JSON to true
	I0920 17:54:40.904186  282664 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5830,"bootTime":1726849051,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 17:54:40.904293  282664 start.go:139] virtualization:  
	I0920 17:54:40.908046  282664 out.go:97] [download-only-997842] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 17:54:40.908224  282664 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:54:40.908327  282664 notify.go:220] Checking for updates...
	I0920 17:54:40.911248  282664 out.go:169] MINIKUBE_LOCATION=19679
	I0920 17:54:40.914113  282664 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:54:40.916905  282664 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	I0920 17:54:40.919646  282664 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	I0920 17:54:40.922540  282664 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 17:54:40.927984  282664 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:54:40.928277  282664 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:54:40.963979  282664 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:54:40.964098  282664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:54:41.020780  282664 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:54:41.010604796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:54:41.020892  282664 docker.go:318] overlay module found
	I0920 17:54:41.023728  282664 out.go:97] Using the docker driver based on user configuration
	I0920 17:54:41.023769  282664 start.go:297] selected driver: docker
	I0920 17:54:41.023777  282664 start.go:901] validating driver "docker" against <nil>
	I0920 17:54:41.023897  282664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:54:41.077895  282664 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:54:41.067867402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:54:41.078125  282664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:54:41.078464  282664 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 17:54:41.078624  282664 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:54:41.081580  282664 out.go:169] Using Docker driver with root privileges
	I0920 17:54:41.084294  282664 cni.go:84] Creating CNI manager for ""
	I0920 17:54:41.084376  282664 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 17:54:41.084487  282664 start.go:340] cluster config:
	{Name:download-only-997842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-997842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:54:41.087466  282664 out.go:97] Starting "download-only-997842" primary control-plane node in "download-only-997842" cluster
	I0920 17:54:41.087506  282664 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 17:54:41.090167  282664 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 17:54:41.090218  282664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 17:54:41.090329  282664 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 17:54:41.106025  282664 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 17:54:41.106211  282664 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 17:54:41.106324  282664 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 17:54:41.183922  282664 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 17:54:41.183953  282664 cache.go:56] Caching tarball of preloaded images
	I0920 17:54:41.184140  282664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 17:54:41.187104  282664 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 17:54:41.187140  282664 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 17:54:41.342893  282664 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-997842 host does not exist
	  To start a cluster, run: "minikube start -p download-only-997842"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-997842
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (7.55s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-348035 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-348035 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.550438651s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.55s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 17:54:56.868232  282659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 17:54:56.868313  282659 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-348035
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-348035: exit status 85 (69.939347ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-997842 | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | -p download-only-997842        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| delete  | -p download-only-997842        | download-only-997842 | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC | 20 Sep 24 17:54 UTC |
	| start   | -o=json --download-only        | download-only-348035 | jenkins | v1.34.0 | 20 Sep 24 17:54 UTC |                     |
	|         | -p download-only-348035        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:54:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:54:49.359030  282872 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:54:49.359295  282872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:54:49.359326  282872 out.go:358] Setting ErrFile to fd 2...
	I0920 17:54:49.359349  282872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:54:49.359712  282872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 17:54:49.360234  282872 out.go:352] Setting JSON to true
	I0920 17:54:49.361373  282872 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5838,"bootTime":1726849051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 17:54:49.361498  282872 start.go:139] virtualization:  
	I0920 17:54:49.364553  282872 out.go:97] [download-only-348035] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 17:54:49.364888  282872 notify.go:220] Checking for updates...
	I0920 17:54:49.367877  282872 out.go:169] MINIKUBE_LOCATION=19679
	I0920 17:54:49.370535  282872 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:54:49.372936  282872 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	I0920 17:54:49.375257  282872 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	I0920 17:54:49.377623  282872 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 17:54:49.382805  282872 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:54:49.383080  282872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:54:49.404560  282872 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:54:49.404666  282872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:54:49.458599  282872 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 17:54:49.447931201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:54:49.458719  282872 docker.go:318] overlay module found
	I0920 17:54:49.461362  282872 out.go:97] Using the docker driver based on user configuration
	I0920 17:54:49.461400  282872 start.go:297] selected driver: docker
	I0920 17:54:49.461409  282872 start.go:901] validating driver "docker" against <nil>
	I0920 17:54:49.461531  282872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:54:49.522960  282872 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 17:54:49.502439805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:54:49.523195  282872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:54:49.523466  282872 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 17:54:49.523625  282872 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:54:49.526342  282872 out.go:169] Using Docker driver with root privileges
	I0920 17:54:49.528827  282872 cni.go:84] Creating CNI manager for ""
	I0920 17:54:49.528908  282872 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:54:49.528935  282872 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:54:49.529029  282872 start.go:340] cluster config:
	{Name:download-only-348035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-348035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:54:49.531768  282872 out.go:97] Starting "download-only-348035" primary control-plane node in "download-only-348035" cluster
	I0920 17:54:49.531794  282872 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 17:54:49.534362  282872 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 17:54:49.534402  282872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:54:49.534594  282872 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 17:54:49.551366  282872 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 17:54:49.551508  282872 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 17:54:49.551537  282872 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 17:54:49.551546  282872 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 17:54:49.551555  282872 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 17:54:49.589686  282872 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 17:54:49.589711  282872 cache.go:56] Caching tarball of preloaded images
	I0920 17:54:49.589897  282872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:54:49.592555  282872 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 17:54:49.592586  282872 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 17:54:49.673960  282872 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19679-277267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-348035 host does not exist
	  To start a cluster, run: "minikube start -p download-only-348035"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-348035
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0920 17:54:58.045743  282659 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-357659 --alsologtostderr --binary-mirror http://127.0.0.1:43129 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-357659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-357659
--- PASS: TestBinaryMirror (0.56s)

TestOffline (93.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-565086 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-565086 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m31.114749027s)
helpers_test.go:175: Cleaning up "offline-docker-565086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-565086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-565086: (2.429014064s)
--- PASS: TestOffline (93.54s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-850577
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-850577: exit status 85 (55.968955ms)

-- stdout --
	* Profile "addons-850577" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850577"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-850577
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-850577: exit status 85 (72.750398ms)

-- stdout --
	* Profile "addons-850577" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850577"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (224.67s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-850577 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-850577 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m44.672574679s)
--- PASS: TestAddons/Setup (224.67s)

TestAddons/serial/Volcano (41.44s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 52.535893ms
addons_test.go:843: volcano-admission stabilized in 53.718186ms
addons_test.go:835: volcano-scheduler stabilized in 55.050548ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-pz2m6" [f7a1d825-dd79-4d62-bd13-31a0d52ed265] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.010156403s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-hsjpf" [f7c984c5-3cbd-451b-a46d-d8d2a7e07e18] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.005100734s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-mltkf" [54b029a7-a784-402a-b08c-994b5604bb23] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003837294s
addons_test.go:870: (dbg) Run:  kubectl --context addons-850577 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-850577 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-850577 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a207eb99-7bd3-4dbb-8b79-b7379adfc40d] Pending
helpers_test.go:344: "test-job-nginx-0" [a207eb99-7bd3-4dbb-8b79-b7379adfc40d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a207eb99-7bd3-4dbb-8b79-b7379adfc40d] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.012002273s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 addons disable volcano --alsologtostderr -v=1: (10.559768889s)
--- PASS: TestAddons/serial/Volcano (41.44s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-850577 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-850577 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/parallel/Ingress (19.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-850577 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-850577 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-850577 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6337bbe9-2576-432c-83f0-d4490694547f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6337bbe9-2576-432c-83f0-d4490694547f] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004625743s
I0920 18:09:27.083827  282659 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-850577 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 addons disable ingress-dns --alsologtostderr -v=1: (1.174625224s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 addons disable ingress --alsologtostderr -v=1: (7.77795517s)
--- PASS: TestAddons/parallel/Ingress (19.62s)

TestAddons/parallel/InspektorGadget (12.00s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-72nst" [f6786842-da0a-41e3-aa14-f99bd8e3655a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005158791s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-850577
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-850577: (5.995840877s)
--- PASS: TestAddons/parallel/InspektorGadget (12.00s)

TestAddons/parallel/MetricsServer (6.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.581842ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lzgjn" [a4dc3384-43e1-4ebe-8492-5adb0ce969d3] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004352076s
addons_test.go:413: (dbg) Run:  kubectl --context addons-850577 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.77s)

TestAddons/parallel/CSI (37.84s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 18:07:40.394837  282659 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 18:07:40.400425  282659 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:07:40.400459  282659 kapi.go:107] duration metric: took 5.634084ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 5.643839ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-850577 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-850577 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fa538b7e-e4fc-4e12-8675-bad7a87b407d] Pending
helpers_test.go:344: "task-pv-pod" [fa538b7e-e4fc-4e12-8675-bad7a87b407d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fa538b7e-e4fc-4e12-8675-bad7a87b407d] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00436513s
addons_test.go:528: (dbg) Run:  kubectl --context addons-850577 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-850577 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-850577 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-850577 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-850577 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-850577 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-850577 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e39a5da9-1e96-4e3e-9404-36a42b8ba25b] Pending
helpers_test.go:344: "task-pv-pod-restore" [e39a5da9-1e96-4e3e-9404-36a42b8ba25b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e39a5da9-1e96-4e3e-9404-36a42b8ba25b] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003477353s
addons_test.go:570: (dbg) Run:  kubectl --context addons-850577 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-850577 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-850577 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.740527679s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.84s)

                                                
                                    
TestAddons/parallel/Headlamp (16.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-850577 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-850577 --alsologtostderr -v=1: (1.068276592s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-lpj62" [2afbb099-ae09-4e8c-a91a-577f78b54c10] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-lpj62" [2afbb099-ae09-4e8c-a91a-577f78b54c10] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-lpj62" [2afbb099-ae09-4e8c-a91a-577f78b54c10] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.01184378s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 addons disable headlamp --alsologtostderr -v=1: (6.125341457s)
--- PASS: TestAddons/parallel/Headlamp (16.21s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-rjshz" [a0db1935-1ac0-4698-aa23-c072e7dc8d81] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003248529s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-850577
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
TestAddons/parallel/LocalPath (52.72s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-850577 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-850577 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850577 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c3d1a49e-8703-4e89-95d9-e6f8d0ac84f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c3d1a49e-8703-4e89-95d9-e6f8d0ac84f2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c3d1a49e-8703-4e89-95d9-e6f8d0ac84f2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003575768s
addons_test.go:938: (dbg) Run:  kubectl --context addons-850577 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 ssh "cat /opt/local-path-provisioner/pvc-f345ef47-0969-4cae-a23c-0960456ac5a9_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-850577 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-850577 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.561516822s)
--- PASS: TestAddons/parallel/LocalPath (52.72s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6zxks" [ac4e3a6e-15b5-407e-8aea-d28feff68e17] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005735078s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-850577
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
TestAddons/parallel/Yakd (11.77s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7fjm5" [b3be9d53-bc97-4a03-b9b4-0b253b70a152] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003848263s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-850577 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-850577 addons disable yakd --alsologtostderr -v=1: (5.767434743s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-850577
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-850577: (11.022306932s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-850577
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-850577
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-850577
--- PASS: TestAddons/StoppedEnableDisable (11.29s)

                                                
                                    
TestCertOptions (37.78s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-892081 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-892081 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.916327125s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-892081 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-892081 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-892081 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-892081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-892081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-892081: (2.125404305s)
--- PASS: TestCertOptions (37.78s)

                                                
                                    
TestCertExpiration (256.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-564276 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-564276 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (41.028250802s)
E0920 18:51:23.880375  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-564276 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-564276 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (33.306519715s)
helpers_test.go:175: Cleaning up "cert-expiration-564276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-564276
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-564276: (2.624077113s)
--- PASS: TestCertExpiration (256.96s)

                                                
                                    
TestDockerFlags (44.78s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-209980 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-209980 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.72019971s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-209980 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-209980 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-209980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-209980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-209980: (2.298585868s)
--- PASS: TestDockerFlags (44.78s)
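The two checks above verify that the `--docker-env` and `--docker-opt` flags reached the daemon by reading `systemctl show docker --property=Environment` and `--property=ExecStart` over ssh. A small Go sketch of parsing such an `Environment=` line into key/value pairs; `envFromProperty` and the sample line are illustrative, not taken from this run:

```go
package main

import (
	"fmt"
	"strings"
)

// envFromProperty extracts KEY=VALUE pairs from a line produced by
// `systemctl show --property=Environment`, e.g. "Environment=FOO=BAR BAZ=BAT".
// Sketch only; the actual test greps the raw ssh output instead of parsing it.
func envFromProperty(line string) map[string]string {
	out := map[string]string{}
	line = strings.TrimPrefix(strings.TrimSpace(line), "Environment=")
	for _, kv := range strings.Fields(line) {
		if k, v, ok := strings.Cut(kv, "="); ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	env := envFromProperty("Environment=FOO=BAR BAZ=BAT")
	fmt.Println(env["FOO"], env["BAZ"]) // prints: BAR BAT
}
```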

                                                
                                    
TestForceSystemdFlag (48.92s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-967470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-967470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.894050769s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-967470 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-967470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-967470
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-967470: (2.638870152s)
--- PASS: TestForceSystemdFlag (48.92s)

                                                
                                    
TestForceSystemdEnv (46.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-914655 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-914655 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.305021703s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-914655 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-914655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-914655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-914655: (2.49014621s)
--- PASS: TestForceSystemdEnv (46.49s)

                                                
                                    
TestErrorSpam/setup (30.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-794778 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-794778 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-794778 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-794778 --driver=docker  --container-runtime=docker: (30.371140845s)
--- PASS: TestErrorSpam/setup (30.37s)

                                                
                                    
TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

                                                
                                    
TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 status
--- PASS: TestErrorSpam/status (1.15s)

                                                
                                    
TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

                                                
                                    
TestErrorSpam/stop (2.13s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 stop: (1.9243347s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-794778 --log_dir /tmp/nospam-794778 stop
--- PASS: TestErrorSpam/stop (2.13s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19679-277267/.minikube/files/etc/test/nested/copy/282659/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.23s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-163144 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-163144 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m9.223452418s)
--- PASS: TestFunctional/serial/StartWithProxy (69.23s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.46s)

=== RUN   TestFunctional/serial/SoftStart
I0920 18:11:40.105760  282659 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-163144 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-163144 --alsologtostderr -v=8: (39.461237441s)
functional_test.go:663: soft start took 39.462741736s for "functional-163144" cluster.
I0920 18:12:19.567329  282659 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (39.46s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-163144 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-163144 cache add registry.k8s.io/pause:3.1: (1.206827845s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-163144 cache add registry.k8s.io/pause:3.3: (1.266316697s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-163144 cache add registry.k8s.io/pause:latest: (1.021360863s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-163144 /tmp/TestFunctionalserialCacheCmdcacheadd_local3228179548/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cache add minikube-local-cache-test:functional-163144
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cache delete minikube-local-cache-test:functional-163144
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-163144
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (341.146963ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 kubectl -- --context functional-163144 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-163144 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (44.73s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-163144 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-163144 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.729760963s)
functional_test.go:761: restart took 44.729863631s for "functional-163144" cluster.
I0920 18:13:11.599778  282659 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (44.73s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-163144 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-163144 logs: (1.253684439s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 logs --file /tmp/TestFunctionalserialLogsFileCmd1630009198/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-163144 logs --file /tmp/TestFunctionalserialLogsFileCmd1630009198/001/logs.txt: (1.276191469s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-163144 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-163144
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-163144: exit status 115 (624.930569ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30507 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-163144 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.67s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 config get cpus: exit status 14 (74.211031ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 config get cpus: exit status 14 (57.499962ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (11.82s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-163144 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-163144 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 324084: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.82s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-163144 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-163144 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (180.875416ms)
-- stdout --
	* [functional-163144] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0920 18:13:53.842516  323781 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:13:53.842635  323781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:13:53.842645  323781 out.go:358] Setting ErrFile to fd 2...
	I0920 18:13:53.842651  323781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:13:53.842902  323781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 18:13:53.843272  323781 out.go:352] Setting JSON to false
	I0920 18:13:53.844295  323781 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6983,"bootTime":1726849051,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 18:13:53.844370  323781 start.go:139] virtualization:  
	I0920 18:13:53.846394  323781 out.go:177] * [functional-163144] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:13:53.848186  323781 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:13:53.848239  323781 notify.go:220] Checking for updates...
	I0920 18:13:53.851692  323781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:13:53.853992  323781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	I0920 18:13:53.855788  323781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	I0920 18:13:53.857043  323781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:13:53.859000  323781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:13:53.861796  323781 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:13:53.862499  323781 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:13:53.886507  323781 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 18:13:53.886696  323781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:13:53.956019  323781 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:13:53.944024581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:13:53.956131  323781 docker.go:318] overlay module found
	I0920 18:13:53.958181  323781 out.go:177] * Using the docker driver based on existing profile
	I0920 18:13:53.960069  323781 start.go:297] selected driver: docker
	I0920 18:13:53.960092  323781 start.go:901] validating driver "docker" against &{Name:functional-163144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-163144 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:13:53.960214  323781 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:13:53.962139  323781 out.go:201] 
	W0920 18:13:53.963592  323781 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 18:13:53.965217  323781 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-163144 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-163144 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
E0920 18:13:53.632491  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-163144 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (201.216551ms)
-- stdout --
	* [functional-163144] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0920 18:13:53.654541  323736 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:13:53.654734  323736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:13:53.654764  323736 out.go:358] Setting ErrFile to fd 2...
	I0920 18:13:53.654786  323736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:13:53.655613  323736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 18:13:53.656071  323736 out.go:352] Setting JSON to false
	I0920 18:13:53.657229  323736 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6983,"bootTime":1726849051,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 18:13:53.657343  323736 start.go:139] virtualization:  
	I0920 18:13:53.659512  323736 out.go:177] * [functional-163144] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 18:13:53.661897  323736 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:13:53.661962  323736 notify.go:220] Checking for updates...
	I0920 18:13:53.665678  323736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:13:53.667355  323736 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	I0920 18:13:53.669845  323736 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	I0920 18:13:53.671235  323736 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:13:53.673214  323736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:13:53.675505  323736 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:13:53.676128  323736 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:13:53.706486  323736 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 18:13:53.706612  323736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:13:53.776035  323736 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:13:53.7639831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:13:53.776149  323736 docker.go:318] overlay module found
	I0920 18:13:53.777588  323736 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 18:13:53.779270  323736 start.go:297] selected driver: docker
	I0920 18:13:53.779291  323736 start.go:901] validating driver "docker" against &{Name:functional-163144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-163144 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:13:53.779399  323736 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:13:53.782378  323736 out.go:201] 
	W0920 18:13:53.784149  323736 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 18:13:53.785829  323736 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
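The JSON form of the status output exercised above (`status -o json`) can be consumed with plain POSIX tools. A minimal sketch, assuming a flat output shape with the same field names as the Go template in the log; the sample values here are hypothetical, not taken from a live cluster:

```shell
# Hypothetical sample of `minikube status -o json` output; field names match
# the Go template exercised above, the values are illustrative only.
status='{"Name":"functional-163144","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured"}'

# Extract one field with sed rather than a JSON parser -- adequate for this
# flat, known shape, not for arbitrary JSON.
host=$(printf '%s' "$status" | sed -n 's/.*"Host":"\([^"]*\)".*/\1/p')
echo "$host"
```

For nested or untrusted JSON a real parser (e.g. `jq`) would be the right tool; the sed form is only safe because the shape is fixed.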

TestFunctional/parallel/ServiceCmdConnect (13.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-163144 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-163144 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-c4rw2" [19a6889f-d2a0-44f8-a27a-efdb38600400] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-c4rw2" [19a6889f-d2a0-44f8-a27a-efdb38600400] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.007925077s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 service hello-node-connect --url
E0920 18:13:43.377454  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:43.383751  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31054
E0920 18:13:43.395479  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1675: http://192.168.49.2:31054: success! body:

Hostname: hello-node-connect-65d86f57f4-c4rw2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31054
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.70s)
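The echoserver response body captured above is line-oriented `key=value` text, so individual fields can be pulled out with sed. A sketch using an inlined copy of the headers section from this log:

```shell
# Headers section copied from the echoserver body above; the tab indentation
# matches the original response.
body='Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31054
	user-agent=Go-http-client/1.1'

# Print the value of the "host=" line, ignoring leading whitespace.
svc_host=$(printf '%s\n' "$body" | sed -n 's/^[[:space:]]*host=//p')
echo "$svc_host"
```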

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (27.08s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c75e56e6-c17a-4f5e-b07b-123e17c5c78a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004590904s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-163144 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-163144 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-163144 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-163144 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [52f26718-ab0a-43d9-ab7b-fb9fe315d6e1] Pending
helpers_test.go:344: "sp-pod" [52f26718-ab0a-43d9-ab7b-fb9fe315d6e1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [52f26718-ab0a-43d9-ab7b-fb9fe315d6e1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004248943s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-163144 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-163144 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-163144 delete -f testdata/storage-provisioner/pod.yaml: (1.022419931s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-163144 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3e073acb-3b8d-4899-a123-c9ae9c61c8f3] Pending
helpers_test.go:344: "sp-pod" [3e073acb-3b8d-4899-a123-c9ae9c61c8f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3e073acb-3b8d-4899-a123-c9ae9c61c8f3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00352808s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-163144 exec sp-pod -- ls /tmp/mount
E0920 18:13:48.510385  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.08s)

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh -n functional-163144 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cp functional-163144:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1991176752/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh -n functional-163144 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh -n functional-163144 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.58s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/282659/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo cat /etc/test/nested/copy/282659/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/282659.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo cat /etc/ssl/certs/282659.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/282659.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo cat /usr/share/ca-certificates/282659.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2826592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo cat /etc/ssl/certs/2826592.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2826592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo cat /usr/share/ca-certificates/2826592.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
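CertSync checks the same certificate under two names: the synced PEM (`282659.pem`) and an OpenSSL-style hashed name (`51391683.0`, conventionally `<subject_hash>.0`). A sketch of that layout using a stand-in file in a temp directory; the hash value is just the one seen in the log above, not computed, and the file content is fake:

```shell
# Simulate the /etc/ssl/certs layout the test verifies: a PEM plus a
# hashed-name symlink pointing at it. All names/content are stand-ins.
certdir=$(mktemp -d)
printf 'fake-ca-pem\n' > "$certdir/282659.pem"
ln -s "$certdir/282659.pem" "$certdir/51391683.0"

# Reading through the hashed name must yield the same bytes as the PEM.
hashed=$(cat "$certdir/51391683.0")
echo "$hashed"
rm -r "$certdir"
```

On a real system the hashed name would come from `openssl x509 -subject_hash` so that OpenSSL can locate the CA by hash at verification time.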

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-163144 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 ssh "sudo systemctl is-active crio": exit status 1 (301.892623ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-163144 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-163144 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-163144 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 320883: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-163144 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-163144 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-163144 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [68adf69e-9200-4cdd-a20e-1ff16f8ffb34] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [68adf69e-9200-4cdd-a20e-1ff16f8ffb34] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004283251s
I0920 18:13:29.296187  282659 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-163144 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.220.95 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-163144 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-163144 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
E0920 18:13:43.416903  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:43.458376  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1445: (dbg) Run:  kubectl --context functional-163144 expose deployment hello-node --type=NodePort --port=8080
E0920 18:13:43.540234  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-dr44n" [7e845059-bc91-4388-aec0-c239f49440d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0920 18:13:43.703051  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:44.024565  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64b4f8f9ff-dr44n" [7e845059-bc91-4388-aec0-c239f49440d5] Running
E0920 18:13:44.666717  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:45.948741  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004525607s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "372.934932ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "68.069084ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "413.223243ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "80.298605ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/MountCmd/any-port (8.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdany-port1505063450/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726856029941521484" to /tmp/TestFunctionalparallelMountCmdany-port1505063450/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726856029941521484" to /tmp/TestFunctionalparallelMountCmdany-port1505063450/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726856029941521484" to /tmp/TestFunctionalparallelMountCmdany-port1505063450/001/test-1726856029941521484
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (455.645021ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 18:13:50.397460  282659 retry.go:31] will retry after 527.965647ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 18:13 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 18:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 18:13 test-1726856029941521484
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh cat /mount-9p/test-1726856029941521484
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-163144 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [eec2ee7b-0a73-4184-9283-6e860235b227] Pending
helpers_test.go:344: "busybox-mount" [eec2ee7b-0a73-4184-9283-6e860235b227] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [eec2ee7b-0a73-4184-9283-6e860235b227] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [eec2ee7b-0a73-4184-9283-6e860235b227] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00424721s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-163144 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdany-port1505063450/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.76s)
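The `retry.go:31] will retry after 527.965647ms` line above reflects a retry-with-backoff pattern around the `findmnt` probe. A minimal shell sketch of the same idea; the helper name and fixed delay are made up, not minikube's implementation:

```shell
# Retry a command up to N times, sleeping between attempts; loosely mirrors
# the harness's "will retry after ..." behaviour. Hypothetical helper.
retry() {
  attempts=$1; shift
  n=1
  while ! "$@"; do
    [ "$n" -ge "$attempts" ] && return 1
    n=$((n + 1))
    sleep 0.5   # the log shows ~0.5s backoff; fixed here for simplicity
  done
}

retry 3 true && echo "mounted"
```

minikube's real helper additionally randomizes the backoff interval, which avoids lock-step retries when several probes run in parallel.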

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 service list -o json
functional_test.go:1494: Took "596.565295ms" to run "out/minikube-linux-arm64 -p functional-163144 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30760
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
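The NodePort endpoint found above can be split into host and port with POSIX parameter expansion, no external tools needed. The URL is the one from this log; the parsing is generic:

```shell
# Endpoint reported by the HTTPS subtest above.
url="https://192.168.49.2:30760"

port=${url##*:}                               # everything after the last ':'
hostpart=${url#*//}; hostpart=${hostpart%:*}  # strip scheme, then the port
echo "$hostpart $port"
```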

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)
TestFunctional/parallel/ServiceCmd/URL (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30760
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
TestFunctional/parallel/MountCmd/specific-port (2.9s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdspecific-port2104961924/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (548.797692ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 18:13:59.250276  282659 retry.go:31] will retry after 741.202368ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdspecific-port2104961924/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 ssh "sudo umount -f /mount-9p": exit status 1 (389.188504ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-163144 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdspecific-port2104961924/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.90s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.52s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1995942717/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1995942717/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1995942717/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T" /mount1: exit status 1 (1.031335046s)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 18:14:02.635293  282659 retry.go:31] will retry after 331.036436ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh "findmnt -T" /mount3
E0920 18:14:03.874376  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-163144 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1995942717/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1995942717/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-163144 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1995942717/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.52s)
TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)
TestFunctional/parallel/Version/components (1.16s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-163144 version -o=json --components: (1.154944538s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-163144 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-163144
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-163144
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-163144 image ls --format short --alsologtostderr:
I0920 18:14:11.734150  326908 out.go:345] Setting OutFile to fd 1 ...
I0920 18:14:11.734299  326908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:11.734308  326908 out.go:358] Setting ErrFile to fd 2...
I0920 18:14:11.734313  326908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:11.734575  326908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
I0920 18:14:11.735246  326908 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:11.735370  326908 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:11.735878  326908 cli_runner.go:164] Run: docker container inspect functional-163144 --format={{.State.Status}}
I0920 18:14:11.763432  326908 ssh_runner.go:195] Run: systemctl --version
I0920 18:14:11.763491  326908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-163144
I0920 18:14:11.792740  326908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/functional-163144/id_rsa Username:docker}
I0920 18:14:11.898611  326908 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-163144 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-163144 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-163144 | 6710d6af740db | 30B    |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-163144 image ls --format table --alsologtostderr:
I0920 18:14:12.389259  327099 out.go:345] Setting OutFile to fd 1 ...
I0920 18:14:12.389459  327099 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:12.389530  327099 out.go:358] Setting ErrFile to fd 2...
I0920 18:14:12.389545  327099 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:12.389911  327099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
I0920 18:14:12.390640  327099 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:12.390802  327099 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:12.391347  327099 cli_runner.go:164] Run: docker container inspect functional-163144 --format={{.State.Status}}
I0920 18:14:12.412574  327099 ssh_runner.go:195] Run: systemctl --version
I0920 18:14:12.412674  327099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-163144
I0920 18:14:12.435173  327099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/functional-163144/id_rsa Username:docker}
I0920 18:14:12.537664  327099 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-163144 image ls --format json --alsologtostderr:
[{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-163144"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"6710d6af740db2ccc37f211ac96f3f6d48becf966660376897239c8b8f53e755","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-163144"],"size":"30"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-163144 image ls --format json --alsologtostderr:
I0920 18:14:12.083519  327000 out.go:345] Setting OutFile to fd 1 ...
I0920 18:14:12.083768  327000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:12.083799  327000 out.go:358] Setting ErrFile to fd 2...
I0920 18:14:12.083818  327000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:12.084242  327000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
I0920 18:14:12.085560  327000 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:12.085785  327000 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:12.086372  327000 cli_runner.go:164] Run: docker container inspect functional-163144 --format={{.State.Status}}
I0920 18:14:12.133072  327000 ssh_runner.go:195] Run: systemctl --version
I0920 18:14:12.133220  327000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-163144
I0920 18:14:12.159525  327000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/functional-163144/id_rsa Username:docker}
I0920 18:14:12.265606  327000 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-163144 image ls --format yaml --alsologtostderr:
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-163144
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 6710d6af740db2ccc37f211ac96f3f6d48becf966660376897239c8b8f53e755
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-163144
size: "30"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-163144 image ls --format yaml --alsologtostderr:
I0920 18:14:11.820205  326939 out.go:345] Setting OutFile to fd 1 ...
I0920 18:14:11.820439  326939 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:11.820468  326939 out.go:358] Setting ErrFile to fd 2...
I0920 18:14:11.820485  326939 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:11.820838  326939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
I0920 18:14:11.821739  326939 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:11.821940  326939 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:11.822530  326939 cli_runner.go:164] Run: docker container inspect functional-163144 --format={{.State.Status}}
I0920 18:14:11.842029  326939 ssh_runner.go:195] Run: systemctl --version
I0920 18:14:11.842088  326939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-163144
I0920 18:14:11.862345  326939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/functional-163144/id_rsa Username:docker}
I0920 18:14:11.979814  326939 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-163144 ssh pgrep buildkitd: exit status 1 (364.550397ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image build -t localhost/my-image:functional-163144 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-163144 image build -t localhost/my-image:functional-163144 testdata/build --alsologtostderr: (2.97570973s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-163144 image build -t localhost/my-image:functional-163144 testdata/build --alsologtostderr:
I0920 18:14:12.393241  327104 out.go:345] Setting OutFile to fd 1 ...
I0920 18:14:12.393897  327104 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:12.393939  327104 out.go:358] Setting ErrFile to fd 2...
I0920 18:14:12.393964  327104 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:14:12.394276  327104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
I0920 18:14:12.395012  327104 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:12.395717  327104 config.go:182] Loaded profile config "functional-163144": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:14:12.396274  327104 cli_runner.go:164] Run: docker container inspect functional-163144 --format={{.State.Status}}
I0920 18:14:12.418788  327104 ssh_runner.go:195] Run: systemctl --version
I0920 18:14:12.418844  327104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-163144
I0920 18:14:12.451585  327104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/functional-163144/id_rsa Username:docker}
I0920 18:14:12.559074  327104 build_images.go:161] Building image from path: /tmp/build.2503808897.tar
I0920 18:14:12.559141  327104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 18:14:12.573205  327104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2503808897.tar
I0920 18:14:12.579509  327104 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2503808897.tar: stat -c "%s %y" /var/lib/minikube/build/build.2503808897.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2503808897.tar': No such file or directory
I0920 18:14:12.579538  327104 ssh_runner.go:362] scp /tmp/build.2503808897.tar --> /var/lib/minikube/build/build.2503808897.tar (3072 bytes)
I0920 18:14:12.606427  327104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2503808897
I0920 18:14:12.616324  327104 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2503808897 -xf /var/lib/minikube/build/build.2503808897.tar
I0920 18:14:12.626486  327104 docker.go:360] Building image: /var/lib/minikube/build/build.2503808897
I0920 18:14:12.626604  327104 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-163144 /var/lib/minikube/build/build.2503808897
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:a69c871795bb9fba64cc538e0bf2fdec75d2bfdf5bfb83cbf26fdab45fcc32dc done
#8 naming to localhost/my-image:functional-163144 done
#8 DONE 0.1s
I0920 18:14:15.252186  327104 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-163144 /var/lib/minikube/build/build.2503808897: (2.625551761s)
I0920 18:14:15.252262  327104 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2503808897
I0920 18:14:15.262938  327104 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2503808897.tar
I0920 18:14:15.274169  327104 build_images.go:217] Built localhost/my-image:functional-163144 from /tmp/build.2503808897.tar
I0920 18:14:15.274207  327104 build_images.go:133] succeeded building to: functional-163144
I0920 18:14:15.274213  327104 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
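The ImageBuild log above shows `build_images.go` packaging the build context into `/tmp/build.<id>.tar`, copying it to the node, untarring it under `/var/lib/minikube/build/`, and running `docker build` there. A minimal local sketch of that context round trip (the remote scp/untar is replaced by temp directories, the `docker build` itself is omitted, and all paths are illustrative):

```shell
set -eu

# Assemble a build context like the one in the log (Dockerfile + content.txt).
ctx=$(mktemp -d)
printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > "$ctx/Dockerfile"
echo hello > "$ctx/content.txt"

# Package the context (stand-in for the log's /tmp/build.<id>.tar)...
tarball=$(mktemp)
tar -C "$ctx" -cf "$tarball" Dockerfile content.txt

# ...and unpack it where the node-side `docker build` would consume it
# (stand-in for /var/lib/minikube/build/build.<id>).
dest=$(mktemp -d)
tar -C "$dest" -xf "$tarball"
ls "$dest"
```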

TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-163144
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image load --daemon kicbase/echo-server:functional-163144 --alsologtostderr
2024/09/20 18:14:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/DockerEnv/bash (1.4s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-163144 docker-env) && out/minikube-linux-arm64 status -p functional-163144"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-163144 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.40s)
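The DockerEnv/bash test drives the `eval $(minikube docker-env)` pattern: the command prints `export` lines and `eval` applies them to the current shell, so the following `docker` invocation talks to the cluster's daemon. A self-contained sketch with a hypothetical stub standing in for `minikube docker-env` (the variable values are illustrative):

```shell
# Hypothetical stand-in for `minikube -p functional-163144 docker-env`,
# which prints shell `export` statements.
docker_env_stub() {
  echo 'export DOCKER_TLS_VERIFY="1"'
  echo 'export DOCKER_HOST="tcp://192.168.49.2:2376"'
}

# eval runs the printed exports in *this* shell, not a subshell.
eval "$(docker_env_stub)"
echo "$DOCKER_HOST"   # tcp://192.168.49.2:2376
```

This is why the test wraps everything in one `/bin/bash -c "eval ... && docker images"`: the exported variables only exist for the lifetime of that shell.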

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image load --daemon kicbase/echo-server:functional-163144 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-163144
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image load --daemon kicbase/echo-server:functional-163144 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image save kicbase/echo-server:functional-163144 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image rm kicbase/echo-server:functional-163144 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-163144
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-163144 image save --daemon kicbase/echo-server:functional-163144 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-163144
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-163144
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-163144
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-163144
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (135.41s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-011032 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 18:14:24.355856  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:05.317296  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:27.238799  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-011032 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m14.120548025s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr: (1.286494625s)
--- PASS: TestMultiControlPlane/serial/StartCluster (135.41s)

TestMultiControlPlane/serial/DeployApp (42.2s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-011032 -- rollout status deployment/busybox: (4.672452493s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 18:16:39.139788  282659 retry.go:31] will retry after 719.509554ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 18:16:40.141063  282659 retry.go:31] will retry after 2.175359402s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 18:16:42.535690  282659 retry.go:31] will retry after 2.032095071s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 18:16:44.741389  282659 retry.go:31] will retry after 4.158031792s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 18:16:49.079106  282659 retry.go:31] will retry after 5.315667432s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 18:16:54.582095  282659 retry.go:31] will retry after 5.640924594s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 18:17:00.941992  282659 retry.go:31] will retry after 11.750909831s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-dvmbb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-qx9xj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-zhzhv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-dvmbb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-qx9xj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-zhzhv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-dvmbb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-qx9xj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-zhzhv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (42.20s)
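The `retry.go` lines above ("will retry after 719ms ... 11.75s") show the test polling the jsonpath Pod-IP query until all three busybox replicas report an IP, with the delay roughly growing on each miss. The polling loop can be sketched as follows, with a hypothetical `probe` function that only returns the third IP on its third call (standing in for `kubectl get pods -o jsonpath='{.items[*].status.podIP}'`); the real sleeps are skipped:

```shell
# Counter kept in a file because $(probe) runs in a subshell,
# where plain variable increments would be lost.
count_file=$(mktemp)
echo 0 > "$count_file"

probe() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  if [ "$n" -ge 3 ]; then
    echo '10.244.1.2 10.244.0.4 10.244.2.2'   # third replica finally has an IP
  else
    echo '10.244.1.2 10.244.0.4'
  fi
}

delay=1
tries=0
ips=''
while [ "$tries" -lt 10 ]; do
  tries=$((tries + 1))
  ips=$(probe)
  [ "$(echo "$ips" | wc -w)" -eq 3 ] && break
  # real loop: sleep "$delay"
  delay=$((delay * 2))   # back off before the next poll
done
echo "$ips"   # 10.244.1.2 10.244.0.4 10.244.2.2
```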

TestMultiControlPlane/serial/PingHostFromPods (1.81s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-dvmbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-dvmbb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-qx9xj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-qx9xj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-zhzhv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-011032 -- exec busybox-7dff88458-zhzhv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.81s)
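The host-IP extraction above pipes busybox `nslookup host.minikube.internal` through `awk 'NR==5' | cut -d' ' -f3`: line 5 of the reply is the answer's "Address" line and field 3 is the IP. Replayed on sample output (the exact reply shape is an assumption about the busybox resolver; real output may differ):

```shell
# Sample busybox-style nslookup reply; line 5 carries the answer.
reply='Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Same pipeline the test runs inside the pod.
host_ip=$(printf '%s\n' "$reply" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # 192.168.49.1
```

The extracted address (`192.168.49.1`, the docker network gateway) is what the subsequent `ping -c 1` targets.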

TestMultiControlPlane/serial/AddWorkerNode (28.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-011032 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-011032 -v=7 --alsologtostderr: (27.711579089s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr: (1.102546699s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.81s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-011032 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048681366s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

TestMultiControlPlane/serial/CopyFile (21.75s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 status --output json -v=7 --alsologtostderr: (1.123811821s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp testdata/cp-test.txt ha-011032:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4077226538/001/cp-test_ha-011032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032:/home/docker/cp-test.txt ha-011032-m02:/home/docker/cp-test_ha-011032_ha-011032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test_ha-011032_ha-011032-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032:/home/docker/cp-test.txt ha-011032-m03:/home/docker/cp-test_ha-011032_ha-011032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test_ha-011032_ha-011032-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032:/home/docker/cp-test.txt ha-011032-m04:/home/docker/cp-test_ha-011032_ha-011032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test_ha-011032_ha-011032-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp testdata/cp-test.txt ha-011032-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4077226538/001/cp-test_ha-011032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m02:/home/docker/cp-test.txt ha-011032:/home/docker/cp-test_ha-011032-m02_ha-011032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test_ha-011032-m02_ha-011032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m02:/home/docker/cp-test.txt ha-011032-m03:/home/docker/cp-test_ha-011032-m02_ha-011032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test_ha-011032-m02_ha-011032-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m02:/home/docker/cp-test.txt ha-011032-m04:/home/docker/cp-test_ha-011032-m02_ha-011032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test_ha-011032-m02_ha-011032-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp testdata/cp-test.txt ha-011032-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4077226538/001/cp-test_ha-011032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m03:/home/docker/cp-test.txt ha-011032:/home/docker/cp-test_ha-011032-m03_ha-011032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test_ha-011032-m03_ha-011032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m03:/home/docker/cp-test.txt ha-011032-m02:/home/docker/cp-test_ha-011032-m03_ha-011032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test_ha-011032-m03_ha-011032-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m03:/home/docker/cp-test.txt ha-011032-m04:/home/docker/cp-test_ha-011032-m03_ha-011032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test_ha-011032-m03_ha-011032-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp testdata/cp-test.txt ha-011032-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4077226538/001/cp-test_ha-011032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m04:/home/docker/cp-test.txt ha-011032:/home/docker/cp-test_ha-011032-m04_ha-011032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032 "sudo cat /home/docker/cp-test_ha-011032-m04_ha-011032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m04:/home/docker/cp-test.txt ha-011032-m02:/home/docker/cp-test_ha-011032-m04_ha-011032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m02 "sudo cat /home/docker/cp-test_ha-011032-m04_ha-011032-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 cp ha-011032-m04:/home/docker/cp-test.txt ha-011032-m03:/home/docker/cp-test_ha-011032-m04_ha-011032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 ssh -n ha-011032-m03 "sudo cat /home/docker/cp-test_ha-011032-m04_ha-011032-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.75s)
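The CopyFile sequence above exercises every ordered (source, destination) node pair: four nodes give twelve node-to-node copies (plus a host round trip per node), each followed by `ssh ... sudo cat` verification on both ends. The pair enumeration the test walks through can be sketched as:

```shell
nodes='ha-011032 ha-011032-m02 ha-011032-m03 ha-011032-m04'
pairs=0
for src in $nodes; do
  for dst in $nodes; do
    [ "$src" = "$dst" ] && continue   # no self-copy
    # real test: minikube -p ha-011032 cp "$src:/home/docker/cp-test.txt" \
    #   "$dst:/home/docker/cp-test_${src}_${dst}.txt"
    pairs=$((pairs + 1))
  done
done
echo "$pairs"   # 12
```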

TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 node stop m02 -v=7 --alsologtostderr: (11.079321998s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
E0920 18:18:20.799129  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:20.806509  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:20.818408  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:20.839903  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:20.881728  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:20.963205  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:21.132787  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr: exit status 7 (837.939409ms)

-- stdout --
	ha-011032
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-011032-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011032-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-011032-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0920 18:18:20.458368  349990 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:18:20.458561  349990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:20.458593  349990 out.go:358] Setting ErrFile to fd 2...
	I0920 18:18:20.458615  349990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:20.458880  349990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 18:18:20.459096  349990 out.go:352] Setting JSON to false
	I0920 18:18:20.459163  349990 mustload.go:65] Loading cluster: ha-011032
	I0920 18:18:20.459290  349990 notify.go:220] Checking for updates...
	I0920 18:18:20.459625  349990 config.go:182] Loaded profile config "ha-011032": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:18:20.459650  349990 status.go:174] checking status of ha-011032 ...
	I0920 18:18:20.460212  349990 cli_runner.go:164] Run: docker container inspect ha-011032 --format={{.State.Status}}
	I0920 18:18:20.483123  349990 status.go:364] ha-011032 host status = "Running" (err=<nil>)
	I0920 18:18:20.483216  349990 host.go:66] Checking if "ha-011032" exists ...
	I0920 18:18:20.483621  349990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-011032
	I0920 18:18:20.502861  349990 host.go:66] Checking if "ha-011032" exists ...
	I0920 18:18:20.503354  349990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:18:20.504836  349990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-011032
	I0920 18:18:20.534845  349990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/ha-011032/id_rsa Username:docker}
	I0920 18:18:20.638377  349990 ssh_runner.go:195] Run: systemctl --version
	I0920 18:18:20.642902  349990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:18:20.656928  349990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:20.736935  349990 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 18:18:20.72615554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:18:20.737714  349990 kubeconfig.go:125] found "ha-011032" server: "https://192.168.49.254:8443"
	I0920 18:18:20.737751  349990 api_server.go:166] Checking apiserver status ...
	I0920 18:18:20.737804  349990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:20.750759  349990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2297/cgroup
	I0920 18:18:20.762217  349990 api_server.go:182] apiserver freezer: "7:freezer:/docker/5f438921b7cdb4f3f2e972857706025f7e1928561be1cde4a9a24c7251d7b0ef/kubepods/burstable/pod4cadd1799a5dd37d92a513211bc54d92/905bdaaea2bca7a9b67856849dd870f0c0e3637164dda1055df754dffcb98ec6"
	I0920 18:18:20.762302  349990 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5f438921b7cdb4f3f2e972857706025f7e1928561be1cde4a9a24c7251d7b0ef/kubepods/burstable/pod4cadd1799a5dd37d92a513211bc54d92/905bdaaea2bca7a9b67856849dd870f0c0e3637164dda1055df754dffcb98ec6/freezer.state
	I0920 18:18:20.772134  349990 api_server.go:204] freezer state: "THAWED"
	I0920 18:18:20.772175  349990 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:18:20.780402  349990 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:18:20.780437  349990 status.go:456] ha-011032 apiserver status = Running (err=<nil>)
	I0920 18:18:20.780449  349990 status.go:176] ha-011032 status: &{Name:ha-011032 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:18:20.780468  349990 status.go:174] checking status of ha-011032-m02 ...
	I0920 18:18:20.780834  349990 cli_runner.go:164] Run: docker container inspect ha-011032-m02 --format={{.State.Status}}
	I0920 18:18:20.806180  349990 status.go:364] ha-011032-m02 host status = "Stopped" (err=<nil>)
	I0920 18:18:20.806423  349990 status.go:377] host is not running, skipping remaining checks
	I0920 18:18:20.806440  349990 status.go:176] ha-011032-m02 status: &{Name:ha-011032-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:18:20.806475  349990 status.go:174] checking status of ha-011032-m03 ...
	I0920 18:18:20.807005  349990 cli_runner.go:164] Run: docker container inspect ha-011032-m03 --format={{.State.Status}}
	I0920 18:18:20.826292  349990 status.go:364] ha-011032-m03 host status = "Running" (err=<nil>)
	I0920 18:18:20.826318  349990 host.go:66] Checking if "ha-011032-m03" exists ...
	I0920 18:18:20.826653  349990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-011032-m03
	I0920 18:18:20.845988  349990 host.go:66] Checking if "ha-011032-m03" exists ...
	I0920 18:18:20.846330  349990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:18:20.846378  349990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-011032-m03
	I0920 18:18:20.866400  349990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/ha-011032-m03/id_rsa Username:docker}
	I0920 18:18:20.968286  349990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:18:20.984024  349990 kubeconfig.go:125] found "ha-011032" server: "https://192.168.49.254:8443"
	I0920 18:18:20.984151  349990 api_server.go:166] Checking apiserver status ...
	I0920 18:18:20.984296  349990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:18:20.999400  349990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2218/cgroup
	I0920 18:18:21.015570  349990 api_server.go:182] apiserver freezer: "7:freezer:/docker/38e364b32c96904ed3826cbd4f1d036c9e6c66051ba06531933a2997501be964/kubepods/burstable/pod2e3dba23713f91cf3803f30d307f2e81/7be30fc173d6fdd16c77ecc2ad952f0aa9990e9662f1c150517f3169d8b937e3"
	I0920 18:18:21.015660  349990 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/38e364b32c96904ed3826cbd4f1d036c9e6c66051ba06531933a2997501be964/kubepods/burstable/pod2e3dba23713f91cf3803f30d307f2e81/7be30fc173d6fdd16c77ecc2ad952f0aa9990e9662f1c150517f3169d8b937e3/freezer.state
	I0920 18:18:21.027641  349990 api_server.go:204] freezer state: "THAWED"
	I0920 18:18:21.027720  349990 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:18:21.036541  349990 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:18:21.036577  349990 status.go:456] ha-011032-m03 apiserver status = Running (err=<nil>)
	I0920 18:18:21.036588  349990 status.go:176] ha-011032-m03 status: &{Name:ha-011032-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:18:21.036607  349990 status.go:174] checking status of ha-011032-m04 ...
	I0920 18:18:21.037245  349990 cli_runner.go:164] Run: docker container inspect ha-011032-m04 --format={{.State.Status}}
	I0920 18:18:21.061568  349990 status.go:364] ha-011032-m04 host status = "Running" (err=<nil>)
	I0920 18:18:21.061595  349990 host.go:66] Checking if "ha-011032-m04" exists ...
	I0920 18:18:21.061989  349990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-011032-m04
	I0920 18:18:21.101360  349990 host.go:66] Checking if "ha-011032-m04" exists ...
	I0920 18:18:21.101707  349990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:18:21.101765  349990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-011032-m04
	I0920 18:18:21.123796  349990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/ha-011032-m04/id_rsa Username:docker}
	I0920 18:18:21.222208  349990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:18:21.236909  349990 status.go:176] ha-011032-m04 status: &{Name:ha-011032-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0920 18:18:21.456842  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (66.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 node start m02 -v=7 --alsologtostderr
E0920 18:18:22.098885  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:23.380806  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:25.942974  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:31.064954  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:41.307168  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:43.376438  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:19:01.788850  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:19:11.080232  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 node start m02 -v=7 --alsologtostderr: (1m5.600798077s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr: (1.116606972s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (66.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.081833288s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (254.91s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-011032 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-011032 -v=7 --alsologtostderr
E0920 18:19:42.750953  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-011032 -v=7 --alsologtostderr: (34.709628372s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-011032 --wait=true -v=7 --alsologtostderr
E0920 18:21:04.673239  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:20.797444  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:43.376284  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-011032 --wait=true -v=7 --alsologtostderr: (3m39.958252746s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-011032
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (254.91s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.93s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 node delete m03 -v=7 --alsologtostderr
E0920 18:23:48.516885  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 node delete m03 -v=7 --alsologtostderr: (10.907430445s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.93s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

TestMultiControlPlane/serial/StopCluster (33.03s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 stop -v=7 --alsologtostderr: (32.909410104s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr: exit status 7 (124.139981ms)

-- stdout --
	ha-011032
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011032-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011032-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 18:24:30.663355  377691 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:24:30.663576  377691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:24:30.663610  377691 out.go:358] Setting ErrFile to fd 2...
	I0920 18:24:30.663635  377691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:24:30.663915  377691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 18:24:30.664135  377691 out.go:352] Setting JSON to false
	I0920 18:24:30.664205  377691 mustload.go:65] Loading cluster: ha-011032
	I0920 18:24:30.664297  377691 notify.go:220] Checking for updates...
	I0920 18:24:30.665451  377691 config.go:182] Loaded profile config "ha-011032": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:24:30.665509  377691 status.go:174] checking status of ha-011032 ...
	I0920 18:24:30.666077  377691 cli_runner.go:164] Run: docker container inspect ha-011032 --format={{.State.Status}}
	I0920 18:24:30.687048  377691 status.go:364] ha-011032 host status = "Stopped" (err=<nil>)
	I0920 18:24:30.687068  377691 status.go:377] host is not running, skipping remaining checks
	I0920 18:24:30.687076  377691 status.go:176] ha-011032 status: &{Name:ha-011032 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:24:30.687109  377691 status.go:174] checking status of ha-011032-m02 ...
	I0920 18:24:30.687422  377691 cli_runner.go:164] Run: docker container inspect ha-011032-m02 --format={{.State.Status}}
	I0920 18:24:30.714989  377691 status.go:364] ha-011032-m02 host status = "Stopped" (err=<nil>)
	I0920 18:24:30.715008  377691 status.go:377] host is not running, skipping remaining checks
	I0920 18:24:30.715015  377691 status.go:176] ha-011032-m02 status: &{Name:ha-011032-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:24:30.715035  377691 status.go:174] checking status of ha-011032-m04 ...
	I0920 18:24:30.715363  377691 cli_runner.go:164] Run: docker container inspect ha-011032-m04 --format={{.State.Status}}
	I0920 18:24:30.731534  377691 status.go:364] ha-011032-m04 host status = "Stopped" (err=<nil>)
	I0920 18:24:30.731556  377691 status.go:377] host is not running, skipping remaining checks
	I0920 18:24:30.731564  377691 status.go:176] ha-011032-m04 status: &{Name:ha-011032-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.03s)

TestMultiControlPlane/serial/RestartCluster (167.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-011032 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-011032 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m46.666953846s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (167.68s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (50.74s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-011032 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-011032 --control-plane -v=7 --alsologtostderr: (49.626030559s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-011032 status -v=7 --alsologtostderr: (1.110382943s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.1786759s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.18s)

TestImageBuild/serial/Setup (35.34s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-662778 --driver=docker  --container-runtime=docker
E0920 18:28:20.797115  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:28:43.376750  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-662778 --driver=docker  --container-runtime=docker: (35.337436586s)
--- PASS: TestImageBuild/serial/Setup (35.34s)

TestImageBuild/serial/NormalBuild (2.22s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-662778
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-662778: (2.216260355s)
--- PASS: TestImageBuild/serial/NormalBuild (2.22s)

TestImageBuild/serial/BuildWithBuildArg (1.13s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-662778
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-662778: (1.131421882s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.13s)

TestImageBuild/serial/BuildWithDockerIgnore (1.11s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-662778
image_test.go:133: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-662778: (1.112410432s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.11s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.15s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-662778
image_test.go:88: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-662778: (1.150856208s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.15s)

TestJSONOutput/start/Command (78.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-494514 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0920 18:30:06.443233  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-494514 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m18.864285571s)
--- PASS: TestJSONOutput/start/Command (78.87s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-494514 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-494514 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-494514 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-494514 --output=json --user=testUser: (5.837719723s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-410959 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-410959 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.535248ms)

-- stdout --
	{"specversion":"1.0","id":"19148c6c-1126-45f6-8f7d-ccdc661c2f6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-410959] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20663147-18ae-415e-8e97-4b7a73f2963e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"653e5e9d-71a6-40a8-a031-e36f4fb84b34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10d47e57-104f-482b-8160-413db44ad8f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig"}}
	{"specversion":"1.0","id":"8b6242bb-9bd6-4808-859c-c7d681c34df9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube"}}
	{"specversion":"1.0","id":"040fedb1-e0cf-41cb-b8b1-c6b5829746ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b404330b-e3ff-4678-8116-3dbb0fc86d70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2239c6ed-8ce2-4088-adbc-028fe011d1fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-410959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-410959
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (40.48s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-724285 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-724285 --network=: (38.367694512s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-724285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-724285
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-724285: (2.096926677s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.48s)

TestKicCustomNetwork/use_default_bridge_network (35.98s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-384708 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-384708 --network=bridge: (33.779257751s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-384708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-384708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-384708: (2.169519875s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.98s)

TestKicExistingNetwork (34.36s)

=== RUN   TestKicExistingNetwork
I0920 18:31:47.540621  282659 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 18:31:47.557758  282659 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 18:31:47.558618  282659 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 18:31:47.559522  282659 cli_runner.go:164] Run: docker network inspect existing-network
W0920 18:31:47.574520  282659 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 18:31:47.574548  282659 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0920 18:31:47.574569  282659 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0920 18:31:47.575466  282659 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 18:31:47.595879  282659 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f148e7dde9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ef:0d:30:2f} reservation:<nil>}
I0920 18:31:47.596799  282659 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001866670}
I0920 18:31:47.596839  282659 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 18:31:47.596898  282659 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 18:31:47.679764  282659 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-791361 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-791361 --network=existing-network: (32.149764566s)
helpers_test.go:175: Cleaning up "existing-network-791361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-791361
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-791361: (2.037793483s)
I0920 18:32:21.884678  282659 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.36s)

TestKicCustomSubnet (34.87s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-300034 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-300034 --subnet=192.168.60.0/24: (32.623506838s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-300034 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-300034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-300034
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-300034: (2.208233342s)
--- PASS: TestKicCustomSubnet (34.87s)

TestKicStaticIP (38.36s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-969961 --static-ip=192.168.200.200
E0920 18:33:20.796891  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-969961 --static-ip=192.168.200.200: (35.927973604s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-969961 ip
helpers_test.go:175: Cleaning up "static-ip-969961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-969961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-969961: (2.269663979s)
--- PASS: TestKicStaticIP (38.36s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (74.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-970381 --driver=docker  --container-runtime=docker
E0920 18:33:43.376923  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-970381 --driver=docker  --container-runtime=docker: (31.878759097s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-973589 --driver=docker  --container-runtime=docker
E0920 18:34:43.878657  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-973589 --driver=docker  --container-runtime=docker: (37.020764157s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-970381
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-973589
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-973589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-973589
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-973589: (2.150286988s)
helpers_test.go:175: Cleaning up "first-970381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-970381
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-970381: (2.229913102s)
--- PASS: TestMinikubeProfile (74.90s)

TestMountStart/serial/StartWithMountFirst (8.81s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-930703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-930703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.806237645s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.81s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-930703 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-935257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-935257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.727011935s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-935257 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.52s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-930703 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-930703 --alsologtostderr -v=5: (1.522045644s)
--- PASS: TestMountStart/serial/DeleteFirst (1.52s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-935257 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-935257
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-935257: (1.256699634s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (8.91s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-935257
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-935257: (7.907941242s)
--- PASS: TestMountStart/serial/RestartStopped (8.91s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-935257 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (84.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-022930 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-022930 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.995686282s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.64s)

TestMultiNode/serial/DeployApp2Nodes (37s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-022930 -- rollout status deployment/busybox: (3.555188376s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:36:50.846407  282659 retry.go:31] will retry after 1.473521997s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:36:52.465904  282659 retry.go:31] will retry after 1.706648362s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:36:54.353012  282659 retry.go:31] will retry after 1.72688807s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:36:56.229120  282659 retry.go:31] will retry after 5.061633046s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:37:01.464569  282659 retry.go:31] will retry after 2.982538526s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:37:04.616382  282659 retry.go:31] will retry after 10.472705849s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:37:15.291840  282659 retry.go:31] will retry after 6.647234664s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-5nx4h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-xs66z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-5nx4h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-xs66z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-5nx4h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-xs66z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.00s)
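The retry loop above polls a kubectl jsonpath query until two pod IPs appear. A minimal sketch of the counting step, with the kubectl output stubbed in (against a live cluster it would come from `kubectl -p multinode-022930 ... get pods -o jsonpath='{.items[*].status.podIP}'` as in the log):

```shell
# Sketch: count pod IPs the way the test's retry loop does.
# "output" is stubbed here; live, it is the space-separated jsonpath result.
output="10.244.0.3 10.244.1.2"
ip_count=$(echo "$output" | wc -w)
if [ "$ip_count" -eq 2 ]; then
  echo "got 2 pod IPs"
else
  echo "expected 2 pod IPs but got $ip_count (may be temporary)"
fi
```

With only one pod scheduled (as in the early retries above), `wc -w` returns 1 and the test backs off and retries.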

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.11s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-5nx4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-5nx4h -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-xs66z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-022930 -- exec busybox-7dff88458-xs66z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.11s)
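The host-ping step derives the gateway IP by slicing busybox's `nslookup` output with `awk 'NR==5' | cut -d' ' -f3`. A sketch with the lookup output stubbed (the addresses mirror the ones in this log; live, the text comes from `nslookup host.minikube.internal` inside the pod):

```shell
# Sketch: extract the host gateway IP from busybox-style nslookup output.
# Line 5 is the "Address 1: <ip> host.minikube.internal" record; field 3 is the IP.
lookup_output="Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal"
host_ip=$(printf '%s\n' "$lookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted IP is what the test then pings with `ping -c 1`, as shown in the log. Note the pipeline is coupled to busybox's exact output layout; a different resolver front end would shift the line number.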

                                                
                                    
TestMultiNode/serial/AddNode (18.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-022930 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-022930 -v 3 --alsologtostderr: (17.462141775s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.31s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.12s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-022930 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.78s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.78s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp testdata/cp-test.txt multinode-022930:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3941909782/001/cp-test_multinode-022930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930:/home/docker/cp-test.txt multinode-022930-m02:/home/docker/cp-test_multinode-022930_multinode-022930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m02 "sudo cat /home/docker/cp-test_multinode-022930_multinode-022930-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930:/home/docker/cp-test.txt multinode-022930-m03:/home/docker/cp-test_multinode-022930_multinode-022930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m03 "sudo cat /home/docker/cp-test_multinode-022930_multinode-022930-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp testdata/cp-test.txt multinode-022930-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3941909782/001/cp-test_multinode-022930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930-m02:/home/docker/cp-test.txt multinode-022930:/home/docker/cp-test_multinode-022930-m02_multinode-022930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930 "sudo cat /home/docker/cp-test_multinode-022930-m02_multinode-022930.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930-m02:/home/docker/cp-test.txt multinode-022930-m03:/home/docker/cp-test_multinode-022930-m02_multinode-022930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m03 "sudo cat /home/docker/cp-test_multinode-022930-m02_multinode-022930-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp testdata/cp-test.txt multinode-022930-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3941909782/001/cp-test_multinode-022930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930-m03:/home/docker/cp-test.txt multinode-022930:/home/docker/cp-test_multinode-022930-m03_multinode-022930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930 "sudo cat /home/docker/cp-test_multinode-022930-m03_multinode-022930.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 cp multinode-022930-m03:/home/docker/cp-test.txt multinode-022930-m02:/home/docker/cp-test_multinode-022930-m03_multinode-022930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 ssh -n multinode-022930-m02 "sudo cat /home/docker/cp-test_multinode-022930-m03_multinode-022930-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.68s)

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-022930 node stop m03: (1.242490949s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-022930 status: exit status 7 (556.106247ms)

                                                
                                                
-- stdout --
	multinode-022930
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022930-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022930-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr: exit status 7 (547.037727ms)

                                                
                                                
-- stdout --
	multinode-022930
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022930-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022930-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:37:56.712052  454023 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:37:56.712249  454023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:37:56.712262  454023 out.go:358] Setting ErrFile to fd 2...
	I0920 18:37:56.712268  454023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:37:56.712552  454023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 18:37:56.712807  454023 out.go:352] Setting JSON to false
	I0920 18:37:56.712859  454023 mustload.go:65] Loading cluster: multinode-022930
	I0920 18:37:56.712954  454023 notify.go:220] Checking for updates...
	I0920 18:37:56.713341  454023 config.go:182] Loaded profile config "multinode-022930": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:37:56.713370  454023 status.go:174] checking status of multinode-022930 ...
	I0920 18:37:56.714039  454023 cli_runner.go:164] Run: docker container inspect multinode-022930 --format={{.State.Status}}
	I0920 18:37:56.734768  454023 status.go:364] multinode-022930 host status = "Running" (err=<nil>)
	I0920 18:37:56.734796  454023 host.go:66] Checking if "multinode-022930" exists ...
	I0920 18:37:56.735125  454023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022930
	I0920 18:37:56.752906  454023 host.go:66] Checking if "multinode-022930" exists ...
	I0920 18:37:56.753241  454023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:37:56.753295  454023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022930
	I0920 18:37:56.781887  454023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/multinode-022930/id_rsa Username:docker}
	I0920 18:37:56.890070  454023 ssh_runner.go:195] Run: systemctl --version
	I0920 18:37:56.894905  454023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:37:56.907849  454023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:37:56.967383  454023 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 18:37:56.955664837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:37:56.967981  454023 kubeconfig.go:125] found "multinode-022930" server: "https://192.168.67.2:8443"
	I0920 18:37:56.968032  454023 api_server.go:166] Checking apiserver status ...
	I0920 18:37:56.968091  454023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:37:56.980347  454023 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2299/cgroup
	I0920 18:37:56.989910  454023 api_server.go:182] apiserver freezer: "7:freezer:/docker/146f7e854640cd064e22f5f85efd36dd4028bbccdee268253bc74c042ee79884/kubepods/burstable/podfe71826fb01530c92b071829e8882043/14369fc5049b0a5d7f0e0220313da01a27269609dfc4cfef9cbf8e98f55b585a"
	I0920 18:37:56.989993  454023 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/146f7e854640cd064e22f5f85efd36dd4028bbccdee268253bc74c042ee79884/kubepods/burstable/podfe71826fb01530c92b071829e8882043/14369fc5049b0a5d7f0e0220313da01a27269609dfc4cfef9cbf8e98f55b585a/freezer.state
	I0920 18:37:56.999215  454023 api_server.go:204] freezer state: "THAWED"
	I0920 18:37:56.999245  454023 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 18:37:57.007562  454023 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 18:37:57.007669  454023 status.go:456] multinode-022930 apiserver status = Running (err=<nil>)
	I0920 18:37:57.007697  454023 status.go:176] multinode-022930 status: &{Name:multinode-022930 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:37:57.007742  454023 status.go:174] checking status of multinode-022930-m02 ...
	I0920 18:37:57.008149  454023 cli_runner.go:164] Run: docker container inspect multinode-022930-m02 --format={{.State.Status}}
	I0920 18:37:57.029007  454023 status.go:364] multinode-022930-m02 host status = "Running" (err=<nil>)
	I0920 18:37:57.029034  454023 host.go:66] Checking if "multinode-022930-m02" exists ...
	I0920 18:37:57.029352  454023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022930-m02
	I0920 18:37:57.047897  454023 host.go:66] Checking if "multinode-022930-m02" exists ...
	I0920 18:37:57.048229  454023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:37:57.048279  454023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022930-m02
	I0920 18:37:57.066090  454023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19679-277267/.minikube/machines/multinode-022930-m02/id_rsa Username:docker}
	I0920 18:37:57.166024  454023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:37:57.178134  454023 status.go:176] multinode-022930-m02 status: &{Name:multinode-022930-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:37:57.178178  454023 status.go:174] checking status of multinode-022930-m03 ...
	I0920 18:37:57.178499  454023 cli_runner.go:164] Run: docker container inspect multinode-022930-m03 --format={{.State.Status}}
	I0920 18:37:57.196387  454023 status.go:364] multinode-022930-m03 host status = "Stopped" (err=<nil>)
	I0920 18:37:57.196414  454023 status.go:377] host is not running, skipping remaining checks
	I0920 18:37:57.196422  454023 status.go:176] multinode-022930-m03 status: &{Name:multinode-022930-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
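The status trace above includes a disk-usage probe, `df -h /var | awk 'NR==2{print $5}'`, run over SSH inside each node. A sketch of the same pipeline run locally rather than inside the minikube node (output depends on the host it runs on):

```shell
# Sketch: the disk-usage probe from the status check, run on the local host
# instead of over SSH. NR==2 selects df's data row; $5 is the Use% column.
usage=$(df -h /var | awk 'NR==2{print $5}')
echo "/var usage: $usage"
```

If the device name is long enough that `df -h` wraps its output, `NR==2` can pick up the wrong row; `df -hP` would avoid that, though the log shows the unportable form.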

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.52s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-022930 node start m03 -v=7 --alsologtostderr: (10.696820278s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.52s)
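The apiserver probe in the StopNode status trace above locates the container's freezer cgroup by grepping `/proc/<pid>/cgroup` and then reads `freezer.state` under that path. The field-splitting step can be sketched with the cgroup line from this log stubbed in:

```shell
# Sketch: extract the freezer cgroup path from a /proc/<pid>/cgroup entry,
# as the status check does before reading freezer.state. The entry is copied
# from this log; live, it would come from:
#   sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup
line="7:freezer:/docker/146f7e854640cd064e22f5f85efd36dd4028bbccdee268253bc74c042ee79884/kubepods/burstable/podfe71826fb01530c92b071829e8882043/14369fc5049b0a5d7f0e0220313da01a27269609dfc4cfef9cbf8e98f55b585a"
freezer_path=$(echo "$line" | cut -d: -f3)
echo "$freezer_path"
# The check then reads /sys/fs/cgroup/freezer${freezer_path}/freezer.state
# and expects "THAWED", matching the log above.
```

This layout is cgroup v1 specific; on a cgroup v2 host there is no separate freezer hierarchy.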

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (117.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-022930
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-022930
E0920 18:38:20.798135  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-022930: (22.631427069s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-022930 --wait=true -v=8 --alsologtostderr
E0920 18:38:43.376584  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-022930 --wait=true -v=8 --alsologtostderr: (1m34.573932324s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-022930
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.36s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.04s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-022930 node delete m03: (5.285077357s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.04s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-022930 stop: (21.391522531s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-022930 status: exit status 7 (98.168944ms)

                                                
                                                
-- stdout --
	multinode-022930
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-022930-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr: exit status 7 (82.489172ms)

                                                
                                                
-- stdout --
	multinode-022930
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-022930-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:40:33.652604  467688 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:40:33.652763  467688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:40:33.652769  467688 out.go:358] Setting ErrFile to fd 2...
	I0920 18:40:33.652776  467688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:40:33.653012  467688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-277267/.minikube/bin
	I0920 18:40:33.653199  467688 out.go:352] Setting JSON to false
	I0920 18:40:33.653240  467688 mustload.go:65] Loading cluster: multinode-022930
	I0920 18:40:33.653343  467688 notify.go:220] Checking for updates...
	I0920 18:40:33.653739  467688 config.go:182] Loaded profile config "multinode-022930": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:40:33.653758  467688 status.go:174] checking status of multinode-022930 ...
	I0920 18:40:33.654606  467688 cli_runner.go:164] Run: docker container inspect multinode-022930 --format={{.State.Status}}
	I0920 18:40:33.672025  467688 status.go:364] multinode-022930 host status = "Stopped" (err=<nil>)
	I0920 18:40:33.672048  467688 status.go:377] host is not running, skipping remaining checks
	I0920 18:40:33.672056  467688 status.go:176] multinode-022930 status: &{Name:multinode-022930 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:40:33.672082  467688 status.go:174] checking status of multinode-022930-m02 ...
	I0920 18:40:33.672411  467688 cli_runner.go:164] Run: docker container inspect multinode-022930-m02 --format={{.State.Status}}
	I0920 18:40:33.690238  467688 status.go:364] multinode-022930-m02 host status = "Stopped" (err=<nil>)
	I0920 18:40:33.690263  467688 status.go:377] host is not running, skipping remaining checks
	I0920 18:40:33.690272  467688 status.go:176] multinode-022930-m02 status: &{Name:multinode-022930-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.57s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.4s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-022930 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-022930 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (51.673238531s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-022930 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.40s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.91s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-022930
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-022930-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-022930-m02 --driver=docker  --container-runtime=docker: exit status 14 (96.671731ms)

                                                
                                                
-- stdout --
	* [multinode-022930-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-022930-m02' is duplicated with machine name 'multinode-022930-m02' in profile 'multinode-022930'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-022930-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-022930-m03 --driver=docker  --container-runtime=docker: (33.981779138s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-022930
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-022930: exit status 80 (578.002854ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-022930 as [worker]

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-022930-m03 already exists in multinode-022930-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-022930-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-022930-m03: (2.185333805s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.91s)

TestPreload (115.94s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-393948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-393948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m9.262876036s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-393948 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-393948 image pull gcr.io/k8s-minikube/busybox: (2.446715455s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-393948
E0920 18:43:20.798472  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-393948: (10.883690284s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-393948 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0920 18:43:43.376900  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-393948 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (30.724031748s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-393948 image list
helpers_test.go:175: Cleaning up "test-preload-393948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-393948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-393948: (2.330132239s)
--- PASS: TestPreload (115.94s)

TestScheduledStopUnix (108.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-621279 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-621279 --memory=2048 --driver=docker  --container-runtime=docker: (35.390787212s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621279 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-621279 -n scheduled-stop-621279
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621279 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 18:44:39.062992  282659 retry.go:31] will retry after 63.157µs: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.063512  282659 retry.go:31] will retry after 219.314µs: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.064658  282659 retry.go:31] will retry after 301.942µs: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.066242  282659 retry.go:31] will retry after 438.559µs: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.067392  282659 retry.go:31] will retry after 338.78µs: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.068540  282659 retry.go:31] will retry after 867.394µs: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.069683  282659 retry.go:31] will retry after 1.558003ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.071925  282659 retry.go:31] will retry after 1.473784ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.074149  282659 retry.go:31] will retry after 3.314071ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.078375  282659 retry.go:31] will retry after 3.65293ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.083272  282659 retry.go:31] will retry after 3.749847ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.087524  282659 retry.go:31] will retry after 12.369806ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.100802  282659 retry.go:31] will retry after 16.608871ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.118135  282659 retry.go:31] will retry after 16.165396ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.134850  282659 retry.go:31] will retry after 39.151808ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
I0920 18:44:39.175121  282659 retry.go:31] will retry after 35.744544ms: open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/scheduled-stop-621279/pid: no such file or directory
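The retry lines above come from minikube's retry helper polling for the scheduled-stop pid file with a growing backoff. As a rough illustration of that pattern (this is not minikube's actual retry.go code; the path, attempt count, and backoff schedule are made up for the sketch):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// pollFile re-reads a path with a multiplicative backoff, mirroring the
// "will retry after ..." lines in the log above. The schedule here is
// illustrative only, not minikube's actual retry behavior.
func pollFile(path string, attempts int) ([]byte, error) {
	backoff := 100 * time.Microsecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		lastErr = err
		fmt.Printf("retry %d: will retry after %v: %v\n", i+1, backoff, err)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow ~1.5x per attempt
	}
	return nil, fmt.Errorf("gave up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// The pid file does not exist yet, so every attempt fails fast.
	if _, err := pollFile("/nonexistent/scheduled-stop/pid", 5); err != nil {
		fmt.Println(err)
	}
}
```

The test then cancels or reissues the schedule, which is why a later attempt reports "os: process already finished" for the previous scheduled-stop process.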
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621279 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621279 -n scheduled-stop-621279
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621279
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621279 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621279
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-621279: exit status 7 (72.146485ms)

-- stdout --
	scheduled-stop-621279
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621279 -n scheduled-stop-621279
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621279 -n scheduled-stop-621279: exit status 7 (75.233583ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-621279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-621279
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-621279: (1.673825869s)
--- PASS: TestScheduledStopUnix (108.76s)

TestSkaffold (129.41s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3083058631 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-571035 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-571035 --memory=2600 --driver=docker  --container-runtime=docker: (36.727804102s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3083058631 run --minikube-profile skaffold-571035 --kube-context skaffold-571035 --status-check=true --port-forward=false --interactive=false
E0920 18:46:46.444804  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3083058631 run --minikube-profile skaffold-571035 --kube-context skaffold-571035 --status-check=true --port-forward=false --interactive=false: (1m16.239030658s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-54676656d4-8tfpb" [714bba6e-256b-49ad-b0c3-d41556010e42] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004441942s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5c78b6578b-7lgbk" [c2eede57-b6a4-4fc5-8a54-9335540ce2f5] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004149033s
helpers_test.go:175: Cleaning up "skaffold-571035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-571035
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-571035: (3.374086479s)
--- PASS: TestSkaffold (129.41s)

TestInsufficientStorage (12.52s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-202639 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-202639 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.170874307s)

-- stdout --
	{"specversion":"1.0","id":"e682eef4-cc62-4e0e-8336-ad5552329ada","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-202639] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"098895d9-eff8-46f5-afa5-45241b376793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"f8b0c175-da72-40e8-a460-29c04c0efc96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bb164e3d-b414-4777-bbaa-8d08ba817fd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig"}}
	{"specversion":"1.0","id":"1a3f4d24-1b56-49ef-8b5a-21196d2ee077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube"}}
	{"specversion":"1.0","id":"7c761a25-d613-414f-8013-4ef555c070a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c9d02fde-01c8-4ad4-9f2c-ea0f0ee2b1e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"60fc8d5e-2e8b-46fa-9bc5-1399fc8d6bc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f4157689-e3fb-4a1b-bff6-f0803e4a1bf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2f07fe8d-241e-43ef-8763-20f68a85cd5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6eee8710-3a32-468d-989b-fdd48e6620fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e39cfacc-23f9-4fb1-94d4-2fd157edb14d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-202639\" primary control-plane node in \"insufficient-storage-202639\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ad963c1-9c47-40ee-be21-33900ce70731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a91259c7-f2fd-4e18-aaaa-97e53e090435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8ab6e14-8ad4-4de1-83cc-e78cc67a5807","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-202639 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-202639 --output=json --layout=cluster: exit status 7 (316.071151ms)

-- stdout --
	{"Name":"insufficient-storage-202639","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-202639","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 18:48:11.762087  502042 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-202639" does not appear in /home/jenkins/minikube-integration/19679-277267/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-202639 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-202639 --output=json --layout=cluster: exit status 7 (320.750001ms)

-- stdout --
	{"Name":"insufficient-storage-202639","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-202639","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 18:48:12.080603  502104 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-202639" does not appear in /home/jenkins/minikube-integration/19679-277267/kubeconfig
	E0920 18:48:12.093810  502104 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/insufficient-storage-202639/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-202639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-202639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-202639: (1.714786969s)
--- PASS: TestInsufficientStorage (12.52s)

TestRunningBinaryUpgrade (126.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2540912502 start -p running-upgrade-328793 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2540912502 start -p running-upgrade-328793 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.882899167s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-328793 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-328793 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m11.982003309s)
helpers_test.go:175: Cleaning up "running-upgrade-328793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-328793
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-328793: (2.236300676s)
--- PASS: TestRunningBinaryUpgrade (126.02s)

TestKubernetesUpgrade (140.71s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.838497872s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-508362
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-508362: (11.109022843s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-508362 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-508362 status --format={{.Host}}: exit status 7 (121.189654ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.343362746s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-508362 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (119.523758ms)

-- stdout --
	* [kubernetes-upgrade-508362] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-508362
	    minikube start -p kubernetes-upgrade-508362 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5083622 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-508362 --kubernetes-version=v1.31.1

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-508362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.448089232s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-508362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-508362
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-508362: (2.571683705s)
--- PASS: TestKubernetesUpgrade (140.71s)

TestMissingContainerUpgrade (126.61s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2637715471 start -p missing-upgrade-203458 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2637715471 start -p missing-upgrade-203458 --memory=2200 --driver=docker  --container-runtime=docker: (48.395760796s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-203458
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-203458: (10.450741712s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-203458
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-203458 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 18:55:30.751594  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-203458 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m4.42614112s)
helpers_test.go:175: Cleaning up "missing-upgrade-203458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-203458
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-203458: (2.467057871s)
--- PASS: TestMissingContainerUpgrade (126.61s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-172998 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-172998 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (86.675194ms)

-- stdout --
	* [NoKubernetes-172998] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-277267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-277267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (49.79s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-172998 --driver=docker  --container-runtime=docker
E0920 18:48:20.797168  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:43.376519  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-172998 --driver=docker  --container-runtime=docker: (49.297584196s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-172998 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-172998 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-172998 --no-kubernetes --driver=docker  --container-runtime=docker: (17.667519501s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-172998 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-172998 status -o json: exit status 2 (329.149826ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-172998","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-172998
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-172998: (1.7751199s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.77s)
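The `status -o json` payload above (exit status 2, host Running but kubelet/apiserver Stopped) can be checked mechanically. A sketch using the sample JSON copied from the log, with plain `grep` standing in for a JSON parser:

```shell
# Status payload as printed by `minikube status -o json` in the run above.
status='{"Name":"NoKubernetes-172998","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

# The test accepts a nonzero exit from `status` as long as the host is up
# while the Kubernetes components are stopped.
echo "$status" | grep -q '"Host":"Running"'    && echo "host running"
echo "$status" | grep -q '"Kubelet":"Stopped"' && echo "kubelet stopped"
```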

                                                
                                    
TestNoKubernetes/serial/Start (10.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-172998 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-172998 --no-kubernetes --driver=docker  --container-runtime=docker: (10.619277965s)
--- PASS: TestNoKubernetes/serial/Start (10.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-172998 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-172998 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.859834ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
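The stderr line `ssh: Process exited with status 3` is `systemctl is-active` reporting a non-active unit (3 is the conventional LSB "not running" code), which minikube's `ssh` subcommand then surfaces as its own exit status 1 — exactly what this "verify kubelet is NOT running" test wants. The propagation of a remote exit status can be sketched with `sh -c` standing in for the ssh round trip:

```shell
# A command exiting 3 propagates its status through sh -c, the same way the
# remote `systemctl is-active` status reaches the ssh client.
sh -c 'exit 3'
echo "remote_exit=$?"   # prints remote_exit=3
```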

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-172998
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-172998: (1.222165157s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-172998 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-172998 --driver=docker  --container-runtime=docker: (8.241484026s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-172998 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-172998 "sudo systemctl is-active --quiet service kubelet": exit status 1 (578.206426ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.58s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (131.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2020808986 start -p stopped-upgrade-154158 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0920 18:52:46.889393  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:46.896365  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:46.907744  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:46.929258  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:46.970548  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:47.051989  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:47.213545  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:47.535222  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:48.177261  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:49.459818  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:52.021974  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:57.143282  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:53:07.385346  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:53:20.797630  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2020808986 start -p stopped-upgrade-154158 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m27.184665595s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2020808986 -p stopped-upgrade-154158 stop
E0920 18:53:27.866678  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2020808986 -p stopped-upgrade-154158 stop: (10.988116435s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-154158 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 18:53:43.376286  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-154158 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.975397311s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (131.15s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-154158
E0920 18:54:08.829273  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-154158: (1.401421166s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.40s)

                                                
                                    
TestPause/serial/Start (85.97s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-405993 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0920 18:57:46.888865  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:14.593081  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:20.796880  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-405993 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m25.973315013s)
--- PASS: TestPause/serial/Start (85.97s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0920 18:58:43.376665  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (52.501547157s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.50s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (36.53s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-405993 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-405993 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.511076157s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.53s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-199368 "pgrep -a kubelet"
I0920 18:59:17.666332  282659 config.go:182] Loaded profile config "auto-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-199368 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l42jd" [2c4f512b-baee-4fc4-ae23-f1688b9734b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l42jd" [2c4f512b-baee-4fc4-ae23-f1688b9734b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004765798s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                    
TestPause/serial/Pause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-405993 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-405993 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-405993 --output=json --layout=cluster: exit status 2 (366.91012ms)

                                                
                                                
-- stdout --
	{"Name":"pause-405993","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-405993","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
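The `--layout=cluster` payload above encodes component states as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), which is why a fully paused cluster yields exit status 2 rather than a clean 0. A sketch of checking for the paused state, using a truncated copy of the JSON from the log and `grep` in place of a JSON parser:

```shell
# Truncated cluster-layout sample from the run above (top-level node only).
status='{"Name":"pause-405993","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.34.0"}'

# 418 is the status code minikube assigns to paused components.
echo "$status" | grep -q '"StatusCode":418' && echo "paused"
```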

                                                
                                    
TestPause/serial/Unpause (0.57s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-405993 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

                                                
                                    
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-405993 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
TestPause/serial/DeletePaused (2.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-405993 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-405993 --alsologtostderr -v=5: (2.121534005s)
--- PASS: TestPause/serial/DeletePaused (2.12s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-405993
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-405993: exit status 1 (14.464532ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-405993: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
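`docker volume inspect` on a deleted volume prints an empty JSON array (`[]`) on stdout, the daemon error on stderr, and exits 1; the test treats that combination as proof the volume is gone. A simulated sketch of that contract (`simulate_inspect` is a stand-in function, no Docker daemon required):

```shell
# Stand-in for `docker volume inspect pause-405993` after deletion:
# empty array on stdout, daemon error on stderr, exit status 1.
simulate_inspect() {
  echo '[]'
  echo 'Error response from daemon: get pause-405993: no such volume' >&2
  return 1
}

rc=0
out=$(simulate_inspect 2>/dev/null) || rc=$?
[ "$out" = "[]" ] && [ "$rc" -eq 1 ] && echo "volume deleted"
```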

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (75.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m15.498431789s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.50s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (85.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m25.68146543s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.68s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j227t" [96dea0f2-e0dc-4417-80c4-8b3aac2afec1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004561179s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-199368 "pgrep -a kubelet"
I0920 19:00:49.385108  282659 config.go:182] Loaded profile config "kindnet-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-199368 replace --force -f testdata/netcat-deployment.yaml
I0920 19:00:49.736580  282659 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-477c6" [39d88f9d-7bab-4bd0-8eae-f6afd93f871f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-477c6" [39d88f9d-7bab-4bd0-8eae-f6afd93f871f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003661509s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p5zqd" [5151b31c-03ac-46c0-a233-02fefbd7a398] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005372813s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m4.729575411s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.73s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-199368 "pgrep -a kubelet"
I0920 19:01:27.280837  282659 config.go:182] Loaded profile config "calico-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-199368 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vnmfd" [16cd0348-dad5-4d1a-a6bf-240c21c97db0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vnmfd" [16cd0348-dad5-4d1a-a6bf-240c21c97db0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004657178s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.44s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.31s)

TestNetworkPlugins/group/false/Start (80.63s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m20.629191857s)
--- PASS: TestNetworkPlugins/group/false/Start (80.63s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-199368 "pgrep -a kubelet"
I0920 19:02:31.615661  282659 config.go:182] Loaded profile config "custom-flannel-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-199368 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fz6cd" [8a44bc89-9c82-4e43-a55a-635cffa0b24b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fz6cd" [8a44bc89-9c82-4e43-a55a-635cffa0b24b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005575523s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.33s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (48.65s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0920 19:03:20.797487  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:26.447011  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (48.648839022s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.65s)

TestNetworkPlugins/group/false/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-199368 "pgrep -a kubelet"
I0920 19:03:30.856101  282659 config.go:182] Loaded profile config "false-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestNetworkPlugins/group/false/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-199368 replace --force -f testdata/netcat-deployment.yaml
I0920 19:03:31.247278  282659 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tq5bb" [ca6f3516-a36d-40f8-a036-25f69c2f3231] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tq5bb" [ca6f3516-a36d-40f8-a036-25f69c2f3231] Running
E0920 19:03:43.376673  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.004101905s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.40s)

TestNetworkPlugins/group/false/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

TestNetworkPlugins/group/false/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.39s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-199368 "pgrep -a kubelet"
I0920 19:03:56.019126  282659 config.go:182] Loaded profile config "enable-default-cni-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-199368 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6kqvd" [c06918d2-0a8a-4849-bca3-80fb06a06003] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6kqvd" [c06918d2-0a8a-4849-bca3-80fb06a06003] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003983434s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

TestNetworkPlugins/group/flannel/Start (62.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m2.104383654s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.10s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (81.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0920 19:04:38.443966  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:58.925888  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m21.436142859s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.44s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7bwt8" [67123825-bf12-4239-ac9d-cc3237647256] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004776439s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-199368 "pgrep -a kubelet"
I0920 19:05:16.433597  282659 config.go:182] Loaded profile config "flannel-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (10.53s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-199368 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gt6dn" [a88051a4-3a67-4a84-a709-5b9ca762a472] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gt6dn" [a88051a4-3a67-4a84-a709-5b9ca762a472] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00450731s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.53s)

TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/kubenet/Start (87.52s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0920 19:05:53.162406  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-199368 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m27.522166706s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (87.52s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-199368 "pgrep -a kubelet"
I0920 19:05:55.636556  282659 config.go:182] Loaded profile config "bridge-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (10.52s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-199368 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6xvsz" [b9a0602b-d99f-438a-a39f-00f79c7a3598] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6xvsz" [b9a0602b-d99f-438a-a39f-00f79c7a3598] Running
E0920 19:06:03.405149  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004486962s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.52s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.28s)

TestStartStop/group/old-k8s-version/serial/FirstStart (180.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-096971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 19:06:41.383118  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:01.809965  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:01.865445  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:04.848029  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-096971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (3m0.499669853s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (180.50s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-199368 "pgrep -a kubelet"
I0920 19:07:18.095523  282659 config.go:182] Loaded profile config "kubenet-199368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-199368 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2vtq8" [66c98305-7b55-4fb0-9bb0-35dd926607c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2vtq8" [66c98305-7b55-4fb0-9bb0-35dd926607c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004202271s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.45s)

TestNetworkPlugins/group/kubenet/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-199368 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.30s)

TestNetworkPlugins/group/kubenet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.22s)

TestNetworkPlugins/group/kubenet/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-199368 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.30s)

TestStartStop/group/no-preload/serial/FirstStart (83.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-928769 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 19:08:03.882275  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:13.003121  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/custom-flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:20.797652  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:26.770066  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.207520  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.213902  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.225242  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.246542  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.287904  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.369317  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.530748  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:31.852954  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:32.495063  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:33.776375  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:36.338667  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:41.460163  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:43.376782  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:51.701776  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:53.965176  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/custom-flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:56.385177  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:56.391632  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:56.403038  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:56.424437  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:56.465964  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:56.547527  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:56.709212  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:57.030571  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:57.672581  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:58.953982  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:01.515432  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:04.753909  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:06.637308  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:09.954568  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:12.183349  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:16.878806  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:17.941520  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-928769 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m23.853856601s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (83.85s)

TestStartStop/group/no-preload/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-928769 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8a87e92f-c4be-441f-98ba-0d39a48e229f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8a87e92f-c4be-441f-98ba-0d39a48e229f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005105401s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-928769 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-928769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-928769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064393964s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-928769 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/no-preload/serial/Stop (11.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-928769 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-928769 --alsologtostderr -v=3: (11.164913585s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.17s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-096971 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [746fa0b8-7131-4352-917d-176ee76bd648] Pending
helpers_test.go:344: "busybox" [746fa0b8-7131-4352-917d-176ee76bd648] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [746fa0b8-7131-4352-917d-176ee76bd648] Running
E0920 19:09:37.360314  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003845343s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-096971 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-096971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-096971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.44501007s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-096971 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.68s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-928769 -n no-preload-928769
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-928769 -n no-preload-928769: exit status 7 (119.235438ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-928769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (290.9s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-928769 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-928769 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m50.511096797s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-928769 -n no-preload-928769
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (290.90s)

TestStartStop/group/old-k8s-version/serial/Stop (11.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-096971 --alsologtostderr -v=3
E0920 19:09:45.652069  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:53.145417  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-096971 --alsologtostderr -v=3: (11.585625586s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-096971 -n old-k8s-version-096971
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-096971 -n old-k8s-version-096971: exit status 7 (112.997094ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-096971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/old-k8s-version/serial/SecondStart (376.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-096971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 19:10:10.081962  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:10.093928  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:10.105307  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:10.126694  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:10.168056  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:10.249350  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:10.410904  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:10.733042  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:11.374819  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:12.656624  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:15.217947  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:15.886701  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/custom-flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:18.322334  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:20.339258  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:30.580597  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:42.901691  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:51.062842  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.069803  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.076311  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.087683  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.109376  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.150972  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.233175  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.394892  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:56.716596  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:57.358757  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:58.640048  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:01.202252  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:06.324414  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:10.611862  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:15.067422  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:16.566481  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:20.884957  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:32.025182  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:37.048449  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:40.244276  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:48.596231  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.016192  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.505197  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.511726  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.523202  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.544792  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.586197  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.667941  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:18.829359  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:19.151118  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:19.793430  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:21.074959  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:23.637657  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:28.759069  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:32.021828  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/custom-flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:39.000463  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:46.888874  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:53.946728  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:59.481867  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:59.728530  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/custom-flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:20.797212  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:31.207893  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:39.938618  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:40.443849  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:43.376604  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:56.384883  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:58.909264  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:17.941948  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:24.086521  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-096971 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (6m16.538868302s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-096971 -n old-k8s-version-096971
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (376.95s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k5x2v" [d376a667-b620-488d-8c89-57370e343d0c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003389447s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k5x2v" [d376a667-b620-488d-8c89-57370e343d0c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003892198s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-928769 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-928769 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-928769 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-928769 -n no-preload-928769
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-928769 -n no-preload-928769: exit status 2 (367.389492ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-928769 -n no-preload-928769
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-928769 -n no-preload-928769: exit status 2 (499.041458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-928769 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-928769 -n no-preload-928769
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-928769 -n no-preload-928769
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)

TestStartStop/group/embed-certs/serial/FirstStart (74.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-253670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 19:15:02.365199  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:10.081950  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.788596  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:42.901610  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:56.069915  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-253670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m14.556435214s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.56s)

TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-253670 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e500259e-6539-40e9-a31b-64ff5fb4e293] Pending
helpers_test.go:344: "busybox" [e500259e-6539-40e9-a31b-64ff5fb4e293] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e500259e-6539-40e9-a31b-64ff5fb4e293] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004306502s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-253670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-74j8f" [0ccd4f20-f1dc-4a68-9e75-b03a9116bcd2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003818434s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-253670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-253670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.122747232s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-253670 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/embed-certs/serial/Stop (11.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-253670 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-253670 --alsologtostderr -v=3: (11.336821169s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.34s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-74j8f" [0ccd4f20-f1dc-4a68-9e75-b03a9116bcd2] Running
E0920 19:16:20.885373  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003989289s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-096971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-096971 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-096971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-096971 -n old-k8s-version-096971
E0920 19:16:23.780938  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-096971 -n old-k8s-version-096971: exit status 2 (359.440019ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-096971 -n old-k8s-version-096971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-096971 -n old-k8s-version-096971: exit status 2 (346.734265ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-096971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-096971 -n old-k8s-version-096971
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-096971 -n old-k8s-version-096971
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-253670 -n embed-certs-253670
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-253670 -n embed-certs-253670: exit status 7 (129.087692ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-253670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/SecondStart (293.57s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-253670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-253670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m53.10555063s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-253670 -n embed-certs-253670
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (293.57s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-378223 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 19:17:18.504754  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:32.022059  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/custom-flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:46.207815  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:46.888898  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/skaffold-571035/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-378223 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m26.233633457s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.23s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-378223 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [232551b2-4dfe-4a91-9fb1-f3033d061954] Pending
helpers_test.go:344: "busybox" [232551b2-4dfe-4a91-9fb1-f3033d061954] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [232551b2-4dfe-4a91-9fb1-f3033d061954] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004504104s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-378223 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-378223 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-378223 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.065761823s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-378223 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-378223 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-378223 --alsologtostderr -v=3: (10.949434863s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223: exit status 7 (72.009617ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-378223 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-378223 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 19:18:20.797903  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:31.207671  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/false-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:43.376894  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:56.384918  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/enable-default-cni-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:17.941982  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:18.650911  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:18.657334  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:18.668835  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:18.690329  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:18.731803  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:18.813322  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:18.975278  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:19.297002  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:19.938368  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:21.220075  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:23.781872  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:28.904151  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:32.532417  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:32.538827  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:32.550322  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:32.571758  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:32.613266  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:32.694907  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:32.856672  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:33.179024  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:33.821168  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:35.102630  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:37.664548  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:39.146378  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:42.786583  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:53.028101  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:59.628737  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:06.448983  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/addons-850577/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:10.082666  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:13.509650  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:40.590913  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:41.013998  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/auto-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:42.900974  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:54.471468  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:56.069166  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/bridge-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-378223 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m59.468207099s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.82s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b24mp" [0d3254d6-8ca3-47ba-9e08-b81d37c6b12a] Running
E0920 19:21:20.885472  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004178356s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b24mp" [0d3254d6-8ca3-47ba-9e08-b81d37c6b12a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003938354s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-253670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-253670 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-253670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-253670 -n embed-certs-253670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-253670 -n embed-certs-253670: exit status 2 (328.803518ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-253670 -n embed-certs-253670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-253670 -n embed-certs-253670: exit status 2 (342.118378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-253670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-253670 -n embed-certs-253670
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-253670 -n embed-certs-253670
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

TestStartStop/group/newest-cni/serial/FirstStart (38.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-639724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 19:22:02.513575  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/no-preload-928769/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:22:05.973940  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kindnet-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-639724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (38.173934696s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.17s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-639724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 19:22:16.393366  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/old-k8s-version-096971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-639724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.148441695s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/newest-cni/serial/Stop (5.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-639724 --alsologtostderr -v=3
E0920 19:22:18.505350  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/kubenet-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-639724 --alsologtostderr -v=3: (5.904436291s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.90s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-639724 -n newest-cni-639724
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-639724 -n newest-cni-639724: exit status 7 (88.254574ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-639724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (19.62s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-639724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 19:22:32.021869  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/custom-flannel-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-639724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (18.746214295s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-639724 -n newest-cni-639724
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.62s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-639724 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/newest-cni/serial/Pause (3.71s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-639724 --alsologtostderr -v=1
E0920 19:22:43.958106  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/calico-199368/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-639724 --alsologtostderr -v=1: (1.268882464s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-639724 -n newest-cni-639724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-639724 -n newest-cni-639724: exit status 2 (360.673144ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-639724 -n newest-cni-639724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-639724 -n newest-cni-639724: exit status 2 (353.859078ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-639724 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-639724 -n newest-cni-639724
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-639724 -n newest-cni-639724
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.71s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lrq7w" [f42993ee-0556-47f2-b9a6-22484f6fc4b7] Running
E0920 19:23:20.797467  282659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-277267/.minikube/profiles/functional-163144/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003754246s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lrq7w" [f42993ee-0556-47f2-b9a6-22484f6fc4b7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003763595s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-378223 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-378223 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-378223 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223: exit status 2 (333.660064ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223: exit status 2 (330.880594ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-378223 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-378223 -n default-k8s-diff-port-378223
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.49s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-115951 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-115951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-115951
--- SKIP: TestDownloadOnlyKic (0.49s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-199368 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-199368

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-199368

>>> host: /etc/nsswitch.conf:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /etc/hosts:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /etc/resolv.conf:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-199368

>>> host: crictl pods:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: crictl containers:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> k8s: describe netcat deployment:
error: context "cilium-199368" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-199368" does not exist

>>> k8s: netcat logs:
error: context "cilium-199368" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-199368" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-199368" does not exist

>>> k8s: coredns logs:
error: context "cilium-199368" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-199368" does not exist

>>> k8s: api server logs:
error: context "cilium-199368" does not exist

>>> host: /etc/cni:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: ip a s:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: ip r s:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: iptables-save:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: iptables table nat:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-199368

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-199368

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-199368" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-199368" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-199368

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-199368

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-199368" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-199368" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-199368" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-199368" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-199368" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: kubelet daemon config:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> k8s: kubelet logs:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-199368

>>> host: docker daemon status:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: docker daemon config:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: docker system info:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: cri-docker daemon status:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: cri-docker daemon config:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: cri-dockerd version:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: containerd daemon status:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: containerd daemon config:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: containerd config dump:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: crio daemon status:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: crio daemon config:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: /etc/crio:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

>>> host: crio config:
* Profile "cilium-199368" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199368"

----------------------- debugLogs end: cilium-199368 [took: 5.61637544s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-199368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-199368
--- SKIP: TestNetworkPlugins/group/cilium (5.90s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-894783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-894783
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
