Test Report: Docker_Linux_docker_arm64 19664

b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285
Failed tests (1/343)

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  75.56s
TestAddons/parallel/Registry (75.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.976834ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-fd6l9" [22ee96cc-eafb-42a6-9f00-4d14a9bbfa5a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.010630646s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8mf99" [7567e96d-94ff-4199-aa1d-8f7b62234e4d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004261846s
addons_test.go:342: (dbg) Run:  kubectl --context addons-810228 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-810228 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-810228 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.113399082s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-810228 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 ip
2024/09/19 18:52:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-810228
helpers_test.go:235: (dbg) docker inspect addons-810228:

-- stdout --
	[
	    {
	        "Id": "c018a4786ebd36725ca299d08412c4a94d35b731e92f9f4966208e7b3da73067",
	        "Created": "2024-09-19T18:39:40.944129893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739284,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:39:41.082004655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/c018a4786ebd36725ca299d08412c4a94d35b731e92f9f4966208e7b3da73067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c018a4786ebd36725ca299d08412c4a94d35b731e92f9f4966208e7b3da73067/hostname",
	        "HostsPath": "/var/lib/docker/containers/c018a4786ebd36725ca299d08412c4a94d35b731e92f9f4966208e7b3da73067/hosts",
	        "LogPath": "/var/lib/docker/containers/c018a4786ebd36725ca299d08412c4a94d35b731e92f9f4966208e7b3da73067/c018a4786ebd36725ca299d08412c4a94d35b731e92f9f4966208e7b3da73067-json.log",
	        "Name": "/addons-810228",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-810228:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-810228",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/efc04905829fe2b7a9c393935b87e79b63b22d1c92b19206454465fa8ab9b3fb-init/diff:/var/lib/docker/overlay2/f96c25cbe84b7425e40f60239fa13d2111bbc164bcf24ae221a7a470db6b8798/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efc04905829fe2b7a9c393935b87e79b63b22d1c92b19206454465fa8ab9b3fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efc04905829fe2b7a9c393935b87e79b63b22d1c92b19206454465fa8ab9b3fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efc04905829fe2b7a9c393935b87e79b63b22d1c92b19206454465fa8ab9b3fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-810228",
	                "Source": "/var/lib/docker/volumes/addons-810228/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-810228",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-810228",
	                "name.minikube.sigs.k8s.io": "addons-810228",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fbe969cc3c5ee99e599f5f0d7cdfc343ec4c2c3cb9445c827cfdf54a13302331",
	            "SandboxKey": "/var/run/docker/netns/fbe969cc3c5e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-810228": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f7f5e4cac6faa1e5a698967e456fb7ee2ad68221578d3d831123767c4a8907f4",
	                    "EndpointID": "8c147caff28c9170e7ac838532908e7794a142b734bda70a17511b5cf12c910a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-810228",
	                        "c018a4786ebd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-810228 -n addons-810228
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 logs -n 25: (1.27813484s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-266640   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-266640                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-266640                                                                     | download-only-266640   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only                                                                     | download-only-736260   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-736260                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-736260                                                                     | download-only-736260   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-266640                                                                     | download-only-266640   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-736260                                                                     | download-only-736260   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | download-docker-690855 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | download-docker-690855                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-690855                                                                   | download-docker-690855 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-490984   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | binary-mirror-490984                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42445                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-490984                                                                     | binary-mirror-490984   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| addons  | enable dashboard -p                                                                         | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-810228                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-810228                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-810228 --wait=true                                                                | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-810228 addons disable                                                                | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:43 UTC | 19 Sep 24 18:43 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-810228 addons disable                                                                | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-810228 addons                                                                        | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-810228 addons                                                                        | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | -p addons-810228                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-810228 ssh cat                                                                       | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | /opt/local-path-provisioner/pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-810228 addons disable                                                                | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-810228 ip                                                                            | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	| addons  | addons-810228 addons disable                                                                | addons-810228          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:17.338184  738797 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:17.338511  738797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:17.338525  738797 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:17.338531  738797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:17.338819  738797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 18:39:17.339322  738797 out.go:352] Setting JSON to false
	I0919 18:39:17.340240  738797 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12098,"bootTime":1726759060,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0919 18:39:17.340326  738797 start.go:139] virtualization:  
	I0919 18:39:17.342875  738797 out.go:177] * [addons-810228] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 18:39:17.344808  738797 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:39:17.344883  738797 notify.go:220] Checking for updates...
	I0919 18:39:17.348643  738797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:17.350445  738797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	I0919 18:39:17.352455  738797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	I0919 18:39:17.354118  738797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 18:39:17.355699  738797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:39:17.357689  738797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:17.381558  738797 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:17.381681  738797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:17.442298  738797 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-19 18:39:17.432401308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:17.442407  738797 docker.go:318] overlay module found
	I0919 18:39:17.444501  738797 out.go:177] * Using the docker driver based on user configuration
	I0919 18:39:17.446291  738797 start.go:297] selected driver: docker
	I0919 18:39:17.446311  738797 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:17.446325  738797 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:39:17.446961  738797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:17.503274  738797 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-19 18:39:17.49350899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:17.503503  738797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:17.503737  738797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:39:17.505538  738797 out.go:177] * Using Docker driver with root privileges
	I0919 18:39:17.507305  738797 cni.go:84] Creating CNI manager for ""
	I0919 18:39:17.507383  738797 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:39:17.507402  738797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:17.507487  738797 start.go:340] cluster config:
	{Name:addons-810228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-810228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:17.509313  738797 out.go:177] * Starting "addons-810228" primary control-plane node in "addons-810228" cluster
	I0919 18:39:17.511097  738797 cache.go:121] Beginning downloading kic base image for docker with docker
	I0919 18:39:17.512781  738797 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:17.514503  738797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:39:17.514554  738797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 18:39:17.514562  738797 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:17.514657  738797 preload.go:172] Found /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 18:39:17.514673  738797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 18:39:17.515039  738797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/config.json ...
	I0919 18:39:17.515068  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/config.json: {Name:mkee134a451e9b9a1d68445992a80a62382c1584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:17.515248  738797 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:17.530449  738797 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:17.530567  738797 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:17.530585  738797 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:39:17.530590  738797 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:39:17.530597  738797 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:39:17.530602  738797 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0919 18:39:34.430433  738797 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0919 18:39:34.430483  738797 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:39:34.430521  738797 start.go:360] acquireMachinesLock for addons-810228: {Name:mkf32b04ff477728ed0dff3e81b55c61a4b83769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:34.430653  738797 start.go:364] duration metric: took 109.029µs to acquireMachinesLock for "addons-810228"
	I0919 18:39:34.430685  738797 start.go:93] Provisioning new machine with config: &{Name:addons-810228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-810228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:39:34.430767  738797 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:39:34.433180  738797 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:39:34.433427  738797 start.go:159] libmachine.API.Create for "addons-810228" (driver="docker")
	I0919 18:39:34.433464  738797 client.go:168] LocalClient.Create starting
	I0919 18:39:34.433581  738797 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca.pem
	I0919 18:39:34.878949  738797 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/cert.pem
	I0919 18:39:35.050664  738797 cli_runner.go:164] Run: docker network inspect addons-810228 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:39:35.069190  738797 cli_runner.go:211] docker network inspect addons-810228 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:39:35.069282  738797 network_create.go:284] running [docker network inspect addons-810228] to gather additional debugging logs...
	I0919 18:39:35.069305  738797 cli_runner.go:164] Run: docker network inspect addons-810228
	W0919 18:39:35.085954  738797 cli_runner.go:211] docker network inspect addons-810228 returned with exit code 1
	I0919 18:39:35.085993  738797 network_create.go:287] error running [docker network inspect addons-810228]: docker network inspect addons-810228: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-810228 not found
	I0919 18:39:35.086010  738797 network_create.go:289] output of [docker network inspect addons-810228]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-810228 not found
	
	** /stderr **
	I0919 18:39:35.086128  738797 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:35.102397  738797 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ba0280}
	I0919 18:39:35.102455  738797 network_create.go:124] attempt to create docker network addons-810228 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:39:35.102531  738797 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-810228 addons-810228
	I0919 18:39:35.183124  738797 network_create.go:108] docker network addons-810228 192.168.49.0/24 created
	I0919 18:39:35.183163  738797 kic.go:121] calculated static IP "192.168.49.2" for the "addons-810228" container
	I0919 18:39:35.183310  738797 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:39:35.197817  738797 cli_runner.go:164] Run: docker volume create addons-810228 --label name.minikube.sigs.k8s.io=addons-810228 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:39:35.215582  738797 oci.go:103] Successfully created a docker volume addons-810228
	I0919 18:39:35.215687  738797 cli_runner.go:164] Run: docker run --rm --name addons-810228-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-810228 --entrypoint /usr/bin/test -v addons-810228:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:39:37.165416  738797 cli_runner.go:217] Completed: docker run --rm --name addons-810228-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-810228 --entrypoint /usr/bin/test -v addons-810228:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.949663438s)
	I0919 18:39:37.165447  738797 oci.go:107] Successfully prepared a docker volume addons-810228
	I0919 18:39:37.165479  738797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:39:37.165498  738797 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:39:37.165572  738797 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-810228:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:39:40.879228  738797 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-810228:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.713577163s)
	I0919 18:39:40.879261  738797 kic.go:203] duration metric: took 3.713759373s to extract preloaded images to volume ...
	W0919 18:39:40.879432  738797 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:39:40.879589  738797 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:39:40.930042  738797 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-810228 --name addons-810228 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-810228 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-810228 --network addons-810228 --ip 192.168.49.2 --volume addons-810228:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:39:41.245014  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Running}}
	I0919 18:39:41.280573  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:39:41.303256  738797 cli_runner.go:164] Run: docker exec addons-810228 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:39:41.361460  738797 oci.go:144] the created container "addons-810228" has a running status.
	I0919 18:39:41.361489  738797 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa...
	I0919 18:39:42.161867  738797 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:39:42.191067  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:39:42.215786  738797 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:39:42.215810  738797 kic_runner.go:114] Args: [docker exec --privileged addons-810228 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:39:42.284767  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:39:42.309219  738797 machine.go:93] provisionDockerMachine start ...
	I0919 18:39:42.309351  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:42.337139  738797 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:42.337430  738797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0919 18:39:42.337442  738797 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:39:42.492304  738797 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-810228
	
	I0919 18:39:42.492330  738797 ubuntu.go:169] provisioning hostname "addons-810228"
	I0919 18:39:42.492427  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:42.509352  738797 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:42.509607  738797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0919 18:39:42.509626  738797 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-810228 && echo "addons-810228" | sudo tee /etc/hostname
	I0919 18:39:42.666512  738797 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-810228
	
	I0919 18:39:42.666603  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:42.683254  738797 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:42.683492  738797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0919 18:39:42.683510  738797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-810228' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-810228/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-810228' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:39:42.827083  738797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:39:42.827114  738797 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-732615/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-732615/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-732615/.minikube}
	I0919 18:39:42.827134  738797 ubuntu.go:177] setting up certificates
	I0919 18:39:42.827143  738797 provision.go:84] configureAuth start
	I0919 18:39:42.827237  738797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-810228
	I0919 18:39:42.843835  738797 provision.go:143] copyHostCerts
	I0919 18:39:42.843917  738797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-732615/.minikube/ca.pem (1082 bytes)
	I0919 18:39:42.844042  738797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-732615/.minikube/cert.pem (1123 bytes)
	I0919 18:39:42.844105  738797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-732615/.minikube/key.pem (1675 bytes)
	I0919 18:39:42.844164  738797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-732615/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca-key.pem org=jenkins.addons-810228 san=[127.0.0.1 192.168.49.2 addons-810228 localhost minikube]
	I0919 18:39:43.591880  738797 provision.go:177] copyRemoteCerts
	I0919 18:39:43.591956  738797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:39:43.592000  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:43.607632  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:39:43.707614  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:39:43.733901  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:39:43.757993  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:39:43.781110  738797 provision.go:87] duration metric: took 953.952886ms to configureAuth
	I0919 18:39:43.781134  738797 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:39:43.781329  738797 config.go:182] Loaded profile config "addons-810228": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:39:43.781386  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:43.797704  738797 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:43.797958  738797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0919 18:39:43.797977  738797 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 18:39:43.943549  738797 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 18:39:43.943570  738797 ubuntu.go:71] root file system type: overlay
	I0919 18:39:43.943686  738797 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 18:39:43.943756  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:43.960750  738797 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:43.961000  738797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0919 18:39:43.961091  738797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 18:39:44.123955  738797 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
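The unit file echoed above leans on a systemd rule: for a service that is not `Type=oneshot`, `ExecStart=` may be set only once, so an override must first reset any inherited value with an empty `ExecStart=` line before supplying its own. A minimal drop-in showing the same pattern (the path and dockerd flags here are hypothetical, only the reset idiom is the point):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
[Service]
# Clear the ExecStart inherited from the packaged unit; without this line
# systemd rejects the service with "more than one ExecStart= setting".
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After writing a drop-in like this, `systemctl daemon-reload` is required before the change takes effect.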
	I0919 18:39:44.124046  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:44.141680  738797 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:44.141933  738797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0919 18:39:44.141962  738797 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 18:39:44.908512  738797 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-19 18:39:44.117254782 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
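The SSH command above uses a compact update-only-if-changed idiom: `diff -u old new` exits 0 when the files are identical and non-zero when they differ, so the `|| { ... }` block (move the new unit into place, then reload and restart) runs only when the generated unit actually changed. A self-contained sketch of the same idiom on scratch files (no systemd involved):

```shell
tmp=$(mktemp -d)
printf 'old unit\n' > "$tmp/docker.service"
printf 'new unit\n' > "$tmp/docker.service.new"
# diff exits non-zero because the files differ, so the replacement runs;
# the real command additionally does daemon-reload + restart in this branch.
diff -u "$tmp/docker.service" "$tmp/docker.service.new" >/dev/null || {
    mv "$tmp/docker.service.new" "$tmp/docker.service"
    echo "unit updated"
}
cat "$tmp/docker.service"
```

When the two files already match, `diff` exits 0, the block is skipped entirely, and the daemon is never restarted — which is why an unchanged re-provision is fast.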
	I0919 18:39:44.908544  738797 machine.go:96] duration metric: took 2.599303656s to provisionDockerMachine
	I0919 18:39:44.908556  738797 client.go:171] duration metric: took 10.475081837s to LocalClient.Create
	I0919 18:39:44.908569  738797 start.go:167] duration metric: took 10.475143154s to libmachine.API.Create "addons-810228"
	I0919 18:39:44.908575  738797 start.go:293] postStartSetup for "addons-810228" (driver="docker")
	I0919 18:39:44.908586  738797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:39:44.908660  738797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:39:44.908705  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:44.926892  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:39:45.050576  738797 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:39:45.067051  738797 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:39:45.067090  738797 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:39:45.067102  738797 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:39:45.067110  738797 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:39:45.067121  738797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-732615/.minikube/addons for local assets ...
	I0919 18:39:45.067226  738797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-732615/.minikube/files for local assets ...
	I0919 18:39:45.067264  738797 start.go:296] duration metric: took 158.675832ms for postStartSetup
	I0919 18:39:45.067633  738797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-810228
	I0919 18:39:45.109619  738797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/config.json ...
	I0919 18:39:45.110070  738797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:39:45.110143  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:45.163769  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:39:45.271929  738797 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:39:45.277938  738797 start.go:128] duration metric: took 10.847153173s to createHost
	I0919 18:39:45.277967  738797 start.go:83] releasing machines lock for "addons-810228", held for 10.847300077s
	I0919 18:39:45.278108  738797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-810228
	I0919 18:39:45.300637  738797 ssh_runner.go:195] Run: cat /version.json
	I0919 18:39:45.300702  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:45.300996  738797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:39:45.301063  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:39:45.323758  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:39:45.330980  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:39:45.423131  738797 ssh_runner.go:195] Run: systemctl --version
	I0919 18:39:45.555074  738797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:39:45.559505  738797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 18:39:45.586892  738797 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:39:45.586979  738797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:45.615870  738797 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
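The `find ... -exec sh -c` pair above patches any loopback CNI config in two ways: it injects a `"name"` field if one is missing (required by newer CNI spec versions) and pins `cniVersion` to `1.0.0`. A sketch of the same two `sed` edits against a scratch copy rather than a live file under /etc/cni/net.d:

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
# Insert a "name" field before the "type" line, but only if none exists yet.
grep -q name "$conf" || sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$conf"
# Pin the declared CNI spec version.
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$conf"
cat "$conf"
```

The `grep -q name || ...` guard makes the patch idempotent: re-running it on an already-patched file changes nothing.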
	I0919 18:39:45.615900  738797 start.go:495] detecting cgroup driver to use...
	I0919 18:39:45.615934  738797 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:45.616040  738797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:45.632451  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0919 18:39:45.642846  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 18:39:45.652926  738797 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 18:39:45.653042  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 18:39:45.663060  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:39:45.673960  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 18:39:45.683854  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:39:45.694802  738797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:39:45.704637  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 18:39:45.714368  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 18:39:45.727999  738797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 18:39:45.737956  738797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:39:45.746461  738797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:39:45.755297  738797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:45.843333  738797 ssh_runner.go:195] Run: sudo systemctl restart containerd
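The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place to match the detected "cgroupfs" driver; the indentation-capturing group `^( *)` lets each edit preserve the file's existing TOML nesting. A sketch of the key edit against a scratch copy (the snippet below is a made-up fragment, not the real config.toml):

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
EOF
# Flip SystemdCgroup off so runc drives cgroupfs directly; \1 re-emits the
# captured leading spaces so the TOML indentation is untouched.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```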
	I0919 18:39:45.938684  738797 start.go:495] detecting cgroup driver to use...
	I0919 18:39:45.938733  738797 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:45.938783  738797 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 18:39:45.952336  738797 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0919 18:39:45.952405  738797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 18:39:45.966701  738797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:45.984443  738797 ssh_runner.go:195] Run: which cri-dockerd
	I0919 18:39:45.989289  738797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 18:39:46.000169  738797 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0919 18:39:46.023553  738797 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 18:39:46.126864  738797 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 18:39:46.230649  738797 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 18:39:46.230854  738797 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0919 18:39:46.249742  738797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:46.348341  738797 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 18:39:46.624267  738797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 18:39:46.636779  738797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:39:46.649037  738797 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 18:39:46.741782  738797 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 18:39:46.833874  738797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:46.924571  738797 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 18:39:46.938473  738797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:39:46.949828  738797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:47.046208  738797 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 18:39:47.119544  738797 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 18:39:47.119634  738797 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 18:39:47.123844  738797 start.go:563] Will wait 60s for crictl version
	I0919 18:39:47.123911  738797 ssh_runner.go:195] Run: which crictl
	I0919 18:39:47.127696  738797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:39:47.165055  738797 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0919 18:39:47.165128  738797 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 18:39:47.187852  738797 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 18:39:47.212923  738797 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0919 18:39:47.213057  738797 cli_runner.go:164] Run: docker network inspect addons-810228 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:47.228457  738797 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:39:47.232100  738797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
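The /etc/hosts command above drops any stale entry for the name, appends the current one, and then copies the temp file back with `cp` instead of editing in place. The likely reason for the `cp` (an inference, not stated in the log): inside a container /etc/hosts is a bind mount, and `sed -i`-style rename-based replacement fails on it, whereas `cp` rewrites the existing inode. A portable sketch against a scratch file:

```shell
hosts=$(mktemp)                 # stand-in for /etc/hosts in this sketch
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
# Remove the old mapping (if any), append the fresh one, write to a temp
# file, then copy over the original without replacing its inode.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
grep 'host.minikube.internal' "$hosts"
```

The grep-then-append sequence also makes the rewrite idempotent: re-running it leaves exactly one entry for the name.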
	I0919 18:39:47.243103  738797 kubeadm.go:883] updating cluster {Name:addons-810228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-810228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:39:47.243249  738797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:39:47.243317  738797 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 18:39:47.261663  738797 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 18:39:47.261687  738797 docker.go:615] Images already preloaded, skipping extraction
	I0919 18:39:47.261768  738797 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 18:39:47.279459  738797 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 18:39:47.279482  738797 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:39:47.279492  738797 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0919 18:39:47.279605  738797 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-810228 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-810228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:39:47.279678  738797 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 18:39:47.328227  738797 cni.go:84] Creating CNI manager for ""
	I0919 18:39:47.328253  738797 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:39:47.328264  738797 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:39:47.328285  738797 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-810228 NodeName:addons-810228 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:39:47.328434  738797 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-810228"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
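The kubeadm config above is a single file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick structural sanity check before handing such a file to kubeadm (the file below is a skeleton of the config above, kinds only):

```shell
ka=$(mktemp)
cat > "$ka" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# List the document kinds; a missing or duplicated kind is the usual sign
# of a mangled multi-document config.
grep '^kind:' "$ka"
```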
	I0919 18:39:47.328508  738797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:39:47.337489  738797 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:39:47.337562  738797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:39:47.346497  738797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 18:39:47.364214  738797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:39:47.383061  738797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0919 18:39:47.400752  738797 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:39:47.404035  738797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:47.414649  738797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:47.508776  738797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:47.527707  738797 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228 for IP: 192.168.49.2
	I0919 18:39:47.527731  738797 certs.go:194] generating shared ca certs ...
	I0919 18:39:47.527773  738797 certs.go:226] acquiring lock for ca certs: {Name:mkd15cc829a7fa3f9965faa1d82fa6a7c42cfbb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:47.527948  738797 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-732615/.minikube/ca.key
	I0919 18:39:47.880734  738797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-732615/.minikube/ca.crt ...
	I0919 18:39:47.880765  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/ca.crt: {Name:mk2027b5daa4fa31cae09f9fb635340e3fe48298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:47.881380  738797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-732615/.minikube/ca.key ...
	I0919 18:39:47.881398  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/ca.key: {Name:mkb2da3a0564b610f83227e88fa55789e23cd4a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:47.881503  738797 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-732615/.minikube/proxy-client-ca.key
	I0919 18:39:48.508154  738797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-732615/.minikube/proxy-client-ca.crt ...
	I0919 18:39:48.508189  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/proxy-client-ca.crt: {Name:mkf2b766b0b62ef51b52902a72f8e70cf3557784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:48.508391  738797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-732615/.minikube/proxy-client-ca.key ...
	I0919 18:39:48.508405  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/proxy-client-ca.key: {Name:mke9675c79ef0c651651098df3d1259fdfc9953f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:48.508492  738797 certs.go:256] generating profile certs ...
	I0919 18:39:48.508552  738797 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.key
	I0919 18:39:48.508570  738797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt with IP's: []
	I0919 18:39:48.638241  738797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt ...
	I0919 18:39:48.638275  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: {Name:mk2a0d69831cbbc47aa4123ed129a30525733375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:48.638947  738797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.key ...
	I0919 18:39:48.638963  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.key: {Name:mk51a3e65a8f5fd4ccfa6ad945f4f88e0676ff63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:48.639050  738797 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.key.a475a18e
	I0919 18:39:48.639067  738797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.crt.a475a18e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:39:49.003138  738797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.crt.a475a18e ...
	I0919 18:39:49.003173  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.crt.a475a18e: {Name:mk5b22a339bce7a65430a352f5cf810de4de04c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:49.003823  738797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.key.a475a18e ...
	I0919 18:39:49.003846  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.key.a475a18e: {Name:mk9f6268e9b661c7546982e2981375cc46e4e129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:49.004496  738797 certs.go:381] copying /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.crt.a475a18e -> /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.crt
	I0919 18:39:49.004592  738797 certs.go:385] copying /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.key.a475a18e -> /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.key
	I0919 18:39:49.004655  738797 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.key
	I0919 18:39:49.004681  738797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.crt with IP's: []
	I0919 18:39:49.475894  738797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.crt ...
	I0919 18:39:49.475928  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.crt: {Name:mk5fd9ed7d156ab38737cd2db1157e82e2871ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:49.476725  738797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.key ...
	I0919 18:39:49.476755  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.key: {Name:mk68716098a2e7f77b5af02b455889b7e44c9692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:49.476962  738797 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:39:49.477003  738797 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:39:49.477040  738797 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:39:49.477069  738797 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-732615/.minikube/certs/key.pem (1675 bytes)
	I0919 18:39:49.477904  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:39:49.503933  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 18:39:49.529203  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:39:49.553346  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:39:49.577838  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:39:49.601819  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 18:39:49.626162  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:39:49.649946  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:39:49.673984  738797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-732615/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:39:49.699029  738797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:39:49.718668  738797 ssh_runner.go:195] Run: openssl version
	I0919 18:39:49.724404  738797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:39:49.734237  738797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:49.738089  738797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:49.738183  738797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:49.745467  738797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 18:39:49.755307  738797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:39:49.758862  738797 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:39:49.758914  738797 kubeadm.go:392] StartCluster: {Name:addons-810228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-810228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:49.759045  738797 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 18:39:49.775664  738797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:39:49.784448  738797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:39:49.793420  738797 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:39:49.793483  738797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:39:49.802199  738797 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:39:49.802220  738797 kubeadm.go:157] found existing configuration files:
	
	I0919 18:39:49.802283  738797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:39:49.816648  738797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:39:49.816741  738797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:39:49.825514  738797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:39:49.835309  738797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:39:49.835404  738797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:39:49.843998  738797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:39:49.853360  738797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:39:49.853455  738797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:39:49.861975  738797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:39:49.871209  738797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:39:49.871302  738797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:39:49.881516  738797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:39:49.927528  738797 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:39:49.927619  738797 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:39:49.955543  738797 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:39:49.955639  738797 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0919 18:39:49.955693  738797 kubeadm.go:310] OS: Linux
	I0919 18:39:49.955757  738797 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:39:49.955825  738797 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:39:49.955892  738797 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:39:49.955980  738797 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:39:49.956061  738797 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:39:49.956140  738797 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:39:49.956223  738797 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:39:49.956306  738797 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:39:49.956376  738797 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:39:50.026894  738797 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:39:50.027026  738797 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:39:50.027147  738797 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:39:50.041968  738797 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:39:50.046678  738797 out.go:235]   - Generating certificates and keys ...
	I0919 18:39:50.046871  738797 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:39:50.046967  738797 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:39:50.461348  738797 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:39:51.009565  738797 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:39:51.255240  738797 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:39:52.024840  738797 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:39:52.257388  738797 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:39:52.257774  738797 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-810228 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:52.782865  738797 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:39:52.783204  738797 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-810228 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:53.063567  738797 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:39:53.476295  738797 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:39:54.288142  738797 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:39:54.288436  738797 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:39:54.521811  738797 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:39:54.889781  738797 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:39:55.436015  738797 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:39:56.354496  738797 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:39:56.513602  738797 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:39:56.514248  738797 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:39:56.517267  738797 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:39:56.521345  738797 out.go:235]   - Booting up control plane ...
	I0919 18:39:56.521458  738797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:39:56.521538  738797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:39:56.521608  738797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:39:56.536783  738797 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:39:56.543352  738797 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:39:56.543416  738797 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:39:56.651613  738797 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:39:56.651738  738797 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:39:58.651216  738797 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001559052s
	I0919 18:39:58.651303  738797 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:40:05.152906  738797 kubeadm.go:310] [api-check] The API server is healthy after 6.501531943s
	I0919 18:40:05.173078  738797 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:40:05.187401  738797 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:40:05.211028  738797 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:40:05.211250  738797 kubeadm.go:310] [mark-control-plane] Marking the node addons-810228 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:40:05.221537  738797 kubeadm.go:310] [bootstrap-token] Using token: 083xy0.8ammfpt2wo1kksxc
	I0919 18:40:05.223572  738797 out.go:235]   - Configuring RBAC rules ...
	I0919 18:40:05.223710  738797 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:40:05.231458  738797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:40:05.239223  738797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:40:05.243021  738797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:40:05.248527  738797 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:40:05.252396  738797 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:40:05.559942  738797 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:40:05.986815  738797 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:40:06.560046  738797 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:40:06.561408  738797 kubeadm.go:310] 
	I0919 18:40:06.561486  738797 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:40:06.561499  738797 kubeadm.go:310] 
	I0919 18:40:06.561578  738797 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:40:06.561587  738797 kubeadm.go:310] 
	I0919 18:40:06.561613  738797 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:40:06.561674  738797 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:40:06.561728  738797 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:40:06.561736  738797 kubeadm.go:310] 
	I0919 18:40:06.561790  738797 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:40:06.561798  738797 kubeadm.go:310] 
	I0919 18:40:06.561845  738797 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:40:06.561853  738797 kubeadm.go:310] 
	I0919 18:40:06.561904  738797 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:40:06.561991  738797 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:40:06.562082  738797 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:40:06.562091  738797 kubeadm.go:310] 
	I0919 18:40:06.562174  738797 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:40:06.562253  738797 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:40:06.562264  738797 kubeadm.go:310] 
	I0919 18:40:06.562351  738797 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 083xy0.8ammfpt2wo1kksxc \
	I0919 18:40:06.562456  738797 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00199610007e05caaa6b3b5ce7b116886bfcdfea16b5b13af731d9b283172a41 \
	I0919 18:40:06.562481  738797 kubeadm.go:310] 	--control-plane 
	I0919 18:40:06.562490  738797 kubeadm.go:310] 
	I0919 18:40:06.562573  738797 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:40:06.562582  738797 kubeadm.go:310] 
	I0919 18:40:06.562662  738797 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 083xy0.8ammfpt2wo1kksxc \
	I0919 18:40:06.562762  738797 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00199610007e05caaa6b3b5ce7b116886bfcdfea16b5b13af731d9b283172a41 
	I0919 18:40:06.565698  738797 kubeadm.go:310] W0919 18:39:49.924076    1816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:06.565993  738797 kubeadm.go:310] W0919 18:39:49.924955    1816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:06.566224  738797 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0919 18:40:06.566334  738797 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:40:06.566355  738797 cni.go:84] Creating CNI manager for ""
	I0919 18:40:06.566380  738797 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:40:06.568445  738797 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:40:06.570257  738797 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:40:06.579626  738797 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 18:40:06.601396  738797 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:40:06.601526  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:06.601605  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-810228 minikube.k8s.io/updated_at=2024_09_19T18_40_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-810228 minikube.k8s.io/primary=true
	I0919 18:40:06.746895  738797 ops.go:34] apiserver oom_adj: -16
	I0919 18:40:06.747053  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:07.247343  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:07.747969  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:08.247866  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:08.747777  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:09.247454  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:09.747170  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:10.247390  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:10.747609  738797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:10.883978  738797 kubeadm.go:1113] duration metric: took 4.282500903s to wait for elevateKubeSystemPrivileges
	I0919 18:40:10.884014  738797 kubeadm.go:394] duration metric: took 21.125104395s to StartCluster
	I0919 18:40:10.884035  738797 settings.go:142] acquiring lock: {Name:mke3f95111e4ca2f9d5245ea7f2c8e6c113288ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:10.884823  738797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-732615/kubeconfig
	I0919 18:40:10.885224  738797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-732615/kubeconfig: {Name:mk128c87ff0a219fb59fab900f5934625428ac86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:10.885805  738797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:40:10.885825  738797 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:40:10.886101  738797 config.go:182] Loaded profile config "addons-810228": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:40:10.886153  738797 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:40:10.886231  738797 addons.go:69] Setting yakd=true in profile "addons-810228"
	I0919 18:40:10.886245  738797 addons.go:234] Setting addon yakd=true in "addons-810228"
	I0919 18:40:10.886269  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.886722  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.887274  738797 addons.go:69] Setting cloud-spanner=true in profile "addons-810228"
	I0919 18:40:10.887292  738797 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-810228"
	I0919 18:40:10.887313  738797 addons.go:69] Setting registry=true in profile "addons-810228"
	I0919 18:40:10.887330  738797 addons.go:234] Setting addon registry=true in "addons-810228"
	I0919 18:40:10.887351  738797 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-810228"
	I0919 18:40:10.887355  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.887375  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.887775  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.887821  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.890129  738797 addons.go:69] Setting default-storageclass=true in profile "addons-810228"
	I0919 18:40:10.890161  738797 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-810228"
	I0919 18:40:10.890377  738797 addons.go:69] Setting storage-provisioner=true in profile "addons-810228"
	I0919 18:40:10.890669  738797 addons.go:234] Setting addon storage-provisioner=true in "addons-810228"
	I0919 18:40:10.890704  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.887297  738797 addons.go:234] Setting addon cloud-spanner=true in "addons-810228"
	I0919 18:40:10.890883  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.891096  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.893381  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.890496  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.887281  738797 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-810228"
	I0919 18:40:10.890505  738797 addons.go:69] Setting gcp-auth=true in profile "addons-810228"
	I0919 18:40:10.890513  738797 addons.go:69] Setting ingress=true in profile "addons-810228"
	I0919 18:40:10.890521  738797 addons.go:69] Setting ingress-dns=true in profile "addons-810228"
	I0919 18:40:10.890527  738797 addons.go:69] Setting inspektor-gadget=true in profile "addons-810228"
	I0919 18:40:10.890535  738797 addons.go:69] Setting metrics-server=true in profile "addons-810228"
	I0919 18:40:10.890575  738797 out.go:177] * Verifying Kubernetes components...
	I0919 18:40:10.890616  738797 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-810228"
	I0919 18:40:10.890623  738797 addons.go:69] Setting volcano=true in profile "addons-810228"
	I0919 18:40:10.890630  738797 addons.go:69] Setting volumesnapshots=true in profile "addons-810228"
	I0919 18:40:10.895328  738797 addons.go:234] Setting addon volumesnapshots=true in "addons-810228"
	I0919 18:40:10.895373  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.895842  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.898756  738797 addons.go:234] Setting addon ingress-dns=true in "addons-810228"
	I0919 18:40:10.898860  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.899507  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.903488  738797 addons.go:234] Setting addon inspektor-gadget=true in "addons-810228"
	I0919 18:40:10.903545  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.904022  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.923751  738797 addons.go:234] Setting addon metrics-server=true in "addons-810228"
	I0919 18:40:10.923810  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.924290  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.942162  738797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:10.942269  738797 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-810228"
	I0919 18:40:10.942629  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.948153  738797 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-810228"
	I0919 18:40:10.948215  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.948698  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.971221  738797 mustload.go:65] Loading cluster: addons-810228
	I0919 18:40:10.971433  738797 config.go:182] Loaded profile config "addons-810228": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:40:10.971691  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.972631  738797 addons.go:234] Setting addon volcano=true in "addons-810228"
	I0919 18:40:10.972691  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.973141  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:10.996872  738797 addons.go:234] Setting addon ingress=true in "addons-810228"
	I0919 18:40:10.996941  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:10.997443  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:11.108363  738797 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:40:11.129134  738797 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:40:11.131386  738797 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:40:11.131704  738797 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:11.131719  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:40:11.131781  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.131950  738797 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:40:11.139299  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:40:11.139419  738797 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:40:11.140005  738797 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:40:11.140082  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.156318  738797 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:11.156342  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:40:11.156408  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.160582  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:40:11.160819  738797 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:40:11.162834  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:40:11.162917  738797 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:40:11.162972  738797 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:40:11.166355  738797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:40:11.166385  738797 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:40:11.166459  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.171382  738797 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:40:11.171615  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:40:11.171816  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.181558  738797 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:11.181587  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:40:11.181649  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.196789  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:40:11.207364  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:40:11.209318  738797 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-810228"
	I0919 18:40:11.209358  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:11.209777  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:11.212082  738797 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:40:11.212104  738797 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:40:11.212166  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.228509  738797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:40:11.231389  738797 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:40:11.229027  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:11.231342  738797 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0919 18:40:11.233637  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:40:11.233777  738797 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:11.233788  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:40:11.233849  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.252043  738797 addons.go:234] Setting addon default-storageclass=true in "addons-810228"
	I0919 18:40:11.252091  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:11.252504  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:11.266339  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:40:11.268578  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:40:11.270342  738797 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:40:11.275324  738797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:40:11.275355  738797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:40:11.275451  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.282358  738797 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0919 18:40:11.287142  738797 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0919 18:40:11.290659  738797 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:40:11.290686  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0919 18:40:11.290756  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.314757  738797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:40:11.323069  738797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:11.325872  738797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:11.333340  738797 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:11.333371  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:40:11.333440  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.398684  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.404323  738797 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:40:11.404552  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.406887  738797 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:40:11.406912  738797 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:40:11.406980  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.411305  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.414068  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.414791  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.422916  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.457459  738797 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:40:11.458823  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.463635  738797 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:40:11.465778  738797 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:11.465802  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:40:11.465872  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.473155  738797 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:11.473176  738797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:40:11.473240  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:11.493638  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.516452  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.529103  738797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:11.540546  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.540631  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	W0919 18:40:11.555457  738797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:40:11.555499  738797 retry.go:31] will retry after 166.506311ms: ssh: handshake failed: EOF
	I0919 18:40:11.561960  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.570343  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:11.596547  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	W0919 18:40:11.598373  738797 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:40:11.598401  738797 retry.go:31] will retry after 332.661085ms: ssh: handshake failed: EOF
	I0919 18:40:11.839445  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:11.925092  738797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:40:11.925122  738797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:40:11.978654  738797 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:40:11.978682  738797 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:40:12.047382  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:12.057930  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:12.067240  738797 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:40:12.067284  738797 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:40:12.142252  738797 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:40:12.142282  738797 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:40:12.149950  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:40:12.188777  738797 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:40:12.188805  738797 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:40:12.190928  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:12.223836  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:12.228802  738797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:40:12.228830  738797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:40:12.269782  738797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:40:12.269807  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:40:12.283218  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:12.294946  738797 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:40:12.294974  738797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:40:12.307428  738797 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:12.307452  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:40:12.328469  738797 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:40:12.328508  738797 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:40:12.386794  738797 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:40:12.386836  738797 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:40:12.394484  738797 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:40:12.394531  738797 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:40:12.414970  738797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:40:12.415011  738797 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:40:12.450068  738797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:40:12.450097  738797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:40:12.478016  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:12.527226  738797 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:40:12.527254  738797 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:40:12.542340  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:12.612037  738797 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:40:12.612079  738797 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:40:12.622592  738797 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:40:12.622631  738797 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:40:12.660201  738797 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:12.660234  738797 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:40:12.669699  738797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:40:12.669728  738797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:40:12.826153  738797 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:40:12.826184  738797 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:40:12.941721  738797 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.713149238s)
	I0919 18:40:12.941765  738797 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 18:40:12.942470  738797 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.413342029s)
	I0919 18:40:12.944053  738797 node_ready.go:35] waiting up to 6m0s for node "addons-810228" to be "Ready" ...
	I0919 18:40:12.952660  738797 node_ready.go:49] node "addons-810228" has status "Ready":"True"
	I0919 18:40:12.952689  738797 node_ready.go:38] duration metric: took 8.61246ms for node "addons-810228" to be "Ready" ...
	I0919 18:40:12.952700  738797 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:40:12.961946  738797 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:12.996672  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:13.098533  738797 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:13.098559  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:40:13.105821  738797 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:13.105853  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:40:13.363234  738797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:40:13.363264  738797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:40:13.446892  738797 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-810228" context rescaled to 1 replicas
	I0919 18:40:13.612462  738797 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:40:13.612503  738797 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:40:13.654862  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:13.727260  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:13.873745  738797 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:40:13.873775  738797 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:40:14.014797  738797 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:14.014825  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:40:14.140050  738797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:40:14.140085  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:40:14.354802  738797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:40:14.354844  738797 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:40:14.359572  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:14.860520  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.021029735s)
	I0919 18:40:14.867416  738797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:40:14.867444  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:40:14.968839  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:15.192242  738797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:40:15.192276  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:40:15.491282  738797 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:15.491309  738797 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:40:15.836917  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:15.989863  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.942444685s)
	I0919 18:40:16.968896  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:18.245189  738797 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:40:18.245275  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:18.279490  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:19.260053  738797 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:40:19.499945  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:19.818085  738797 addons.go:234] Setting addon gcp-auth=true in "addons-810228"
	I0919 18:40:19.818215  738797 host.go:66] Checking if "addons-810228" exists ...
	I0919 18:40:19.818742  738797 cli_runner.go:164] Run: docker container inspect addons-810228 --format={{.State.Status}}
	I0919 18:40:19.840245  738797 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:40:19.840300  738797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-810228
	I0919 18:40:19.866140  738797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/addons-810228/id_rsa Username:docker}
	I0919 18:40:21.575879  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.51790631s)
	I0919 18:40:21.575984  738797 addons.go:475] Verifying addon ingress=true in "addons-810228"
	I0919 18:40:21.578656  738797 out.go:177] * Verifying ingress addon...
	I0919 18:40:21.581123  738797 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:40:21.586393  738797 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:40:21.586570  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.044127  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:22.168352  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.642716  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.095340  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.482977  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.332975734s)
	I0919 18:40:23.483056  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.292106431s)
	I0919 18:40:23.483098  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.259238245s)
	I0919 18:40:23.483342  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.200101515s)
	I0919 18:40:23.483422  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.005374556s)
	I0919 18:40:23.483606  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.941233023s)
	I0919 18:40:23.483691  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.486986039s)
	I0919 18:40:23.483714  738797 addons.go:475] Verifying addon metrics-server=true in "addons-810228"
	I0919 18:40:23.483793  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.828903433s)
	W0919 18:40:23.483818  738797 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:40:23.483838  738797 retry.go:31] will retry after 314.994357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:40:23.483883  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.756598988s)
	I0919 18:40:23.484024  738797 addons.go:475] Verifying addon registry=true in "addons-810228"
	I0919 18:40:23.484402  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.124788452s)
	I0919 18:40:23.486152  738797 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-810228 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:40:23.486238  738797 out.go:177] * Verifying registry addon...
	I0919 18:40:23.489298  738797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:40:23.586460  738797 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:23.586489  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 18:40:23.630346  738797 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0919 18:40:23.672983  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.799568  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:24.048575  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.077187  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:24.226725  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.496737  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.620825  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.752613  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.915641508s)
	I0919 18:40:24.752692  738797 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-810228"
	I0919 18:40:24.752945  738797 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.912679313s)
	I0919 18:40:24.755534  738797 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:40:24.755651  738797 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:24.761323  738797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:40:24.763776  738797 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:40:24.766569  738797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:40:24.766744  738797 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:40:24.768548  738797 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:24.768578  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.883264  738797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:40:24.883291  738797 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:40:24.988595  738797 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:24.988626  738797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:40:24.994071  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.081667  738797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:25.095500  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.268303  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.494690  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.586179  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.766346  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.993909  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.086541  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.270515  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.468712  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:26.495486  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.597256  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.607996  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.808275419s)
	I0919 18:40:26.608122  738797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.526429579s)
	I0919 18:40:26.611238  738797 addons.go:475] Verifying addon gcp-auth=true in "addons-810228"
	I0919 18:40:26.614259  738797 out.go:177] * Verifying gcp-auth addon...
	I0919 18:40:26.617270  738797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:40:26.621053  738797 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:40:26.766837  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.993538  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.087464  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.266536  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.493653  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.585848  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.766864  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.993327  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.086226  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.267070  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.493718  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.586217  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.767376  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.968975  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:28.995012  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.095862  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.266956  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.493929  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.586298  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.766542  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.993623  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.094168  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.266371  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.495141  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.594630  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.766760  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.994374  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.095899  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.269395  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.468574  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:31.494074  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.597438  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.766236  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.994044  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.085621  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.266462  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.493317  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.586071  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.766277  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.993898  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.086495  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.266253  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.468875  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:33.493869  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.586431  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.766354  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.994021  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.085653  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.266656  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.493871  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.586206  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.766493  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.993865  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.087083  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.266068  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.469440  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:35.493302  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.585761  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.774468  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.993680  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.086481  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.266920  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.492908  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.586901  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.766358  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.994065  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.089612  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.267158  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.493634  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.585993  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.765974  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.968830  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:37.993573  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.091258  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.266808  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.494707  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.597530  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.766067  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.993514  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.086390  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.267616  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.494163  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.585436  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.766817  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.994326  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.086871  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.266482  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.505471  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:40.515244  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.589532  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.767148  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.994589  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.085987  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.266973  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.494456  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.586730  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.766546  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.993797  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.087150  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.267698  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.494077  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.586104  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.770813  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.968029  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:42.993535  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.085829  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.267722  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.492737  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.586801  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.766306  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.993537  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.086036  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.266458  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.493281  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.586004  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.766971  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.969645  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:44.995890  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.140900  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.267927  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.493858  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.586109  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.766611  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.994098  738797 kapi.go:107] duration metric: took 22.504800911s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:40:46.085734  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.266833  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.586742  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.768303  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.086828  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.266633  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.468995  738797 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:47.586232  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.766366  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.086850  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.266950  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.587183  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.779398  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.973126  738797 pod_ready.go:93] pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:48.973209  738797 pod_ready.go:82] duration metric: took 36.011221383s for pod "coredns-7c65d6cfc9-fhvjq" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:48.973237  738797 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kndrw" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:48.977101  738797 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-kndrw" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kndrw" not found
	I0919 18:40:48.977169  738797 pod_ready.go:82] duration metric: took 3.910753ms for pod "coredns-7c65d6cfc9-kndrw" in "kube-system" namespace to be "Ready" ...
	E0919 18:40:48.977195  738797 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-kndrw" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kndrw" not found
	I0919 18:40:48.977217  738797 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:48.983832  738797 pod_ready.go:93] pod "etcd-addons-810228" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:48.983921  738797 pod_ready.go:82] duration metric: took 6.671233ms for pod "etcd-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:48.983957  738797 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:48.992816  738797 pod_ready.go:93] pod "kube-apiserver-addons-810228" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:48.992889  738797 pod_ready.go:82] duration metric: took 8.891533ms for pod "kube-apiserver-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:48.992916  738797 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:49.013156  738797 pod_ready.go:93] pod "kube-controller-manager-addons-810228" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:49.013181  738797 pod_ready.go:82] duration metric: took 20.244011ms for pod "kube-controller-manager-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:49.013194  738797 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c9r5f" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:49.089394  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.166508  738797 pod_ready.go:93] pod "kube-proxy-c9r5f" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:49.166539  738797 pod_ready.go:82] duration metric: took 153.337916ms for pod "kube-proxy-c9r5f" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:49.166549  738797 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:49.268582  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.568930  738797 pod_ready.go:93] pod "kube-scheduler-addons-810228" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:49.568956  738797 pod_ready.go:82] duration metric: took 402.39829ms for pod "kube-scheduler-addons-810228" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:49.568965  738797 pod_ready.go:39] duration metric: took 36.616254312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:40:49.568993  738797 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:40:49.569067  738797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:49.592963  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.640590  738797 api_server.go:72] duration metric: took 38.754733429s to wait for apiserver process to appear ...
	I0919 18:40:49.640627  738797 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:40:49.640649  738797 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:40:49.652040  738797 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:40:49.653518  738797 api_server.go:141] control plane version: v1.31.1
	I0919 18:40:49.653548  738797 api_server.go:131] duration metric: took 12.91271ms to wait for apiserver health ...
	I0919 18:40:49.653558  738797 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:40:49.767801  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.777749  738797 system_pods.go:59] 17 kube-system pods found
	I0919 18:40:49.777789  738797 system_pods.go:61] "coredns-7c65d6cfc9-fhvjq" [dd8b0777-d8b4-4ca1-a17f-9882d8125c02] Running
	I0919 18:40:49.777802  738797 system_pods.go:61] "csi-hostpath-attacher-0" [9f7d35fc-a216-46b1-b985-806564137d3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:40:49.777810  738797 system_pods.go:61] "csi-hostpath-resizer-0" [c8ae5991-891a-4332-a3fe-a59832909de7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:40:49.777820  738797 system_pods.go:61] "csi-hostpathplugin-mwch8" [e053f2d9-ad13-4ed9-9897-bb280f40436a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:40:49.777826  738797 system_pods.go:61] "etcd-addons-810228" [658e4d40-1d03-49d8-823d-dbd45b05f468] Running
	I0919 18:40:49.777832  738797 system_pods.go:61] "kube-apiserver-addons-810228" [3a880ebe-50d7-4a64-8b62-4dc847ad683d] Running
	I0919 18:40:49.777840  738797 system_pods.go:61] "kube-controller-manager-addons-810228" [571fc519-faaf-4855-b16c-14d86a2e2663] Running
	I0919 18:40:49.777845  738797 system_pods.go:61] "kube-ingress-dns-minikube" [d18a22d5-16b5-476b-a6b0-8d6709de3aa0] Running
	I0919 18:40:49.777849  738797 system_pods.go:61] "kube-proxy-c9r5f" [eba6e1b3-80b2-4365-bd5e-a3081b58bf46] Running
	I0919 18:40:49.777862  738797 system_pods.go:61] "kube-scheduler-addons-810228" [4e0ea48f-d14f-4966-8892-b5cf588bada9] Running
	I0919 18:40:49.777869  738797 system_pods.go:61] "metrics-server-84c5f94fbc-lxrnr" [29562cd5-ea79-4850-8f5a-af67127f08a7] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:40:49.777881  738797 system_pods.go:61] "nvidia-device-plugin-daemonset-gtn9v" [5894bc48-dcaa-4555-a48a-4648d0d4a6c4] Running
	I0919 18:40:49.777889  738797 system_pods.go:61] "registry-66c9cd494c-fd6l9" [22ee96cc-eafb-42a6-9f00-4d14a9bbfa5a] Running
	I0919 18:40:49.777900  738797 system_pods.go:61] "registry-proxy-8mf99" [7567e96d-94ff-4199-aa1d-8f7b62234e4d] Running
	I0919 18:40:49.777908  738797 system_pods.go:61] "snapshot-controller-56fcc65765-7rhtc" [6305125f-9acf-43f3-9cb2-e5b158474a0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:49.777923  738797 system_pods.go:61] "snapshot-controller-56fcc65765-g9gnd" [16c9d372-2509-4a83-877d-a00fb10ec6c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:49.777928  738797 system_pods.go:61] "storage-provisioner" [f7b250ff-a723-4ea4-87fd-8682f7488768] Running
	I0919 18:40:49.777935  738797 system_pods.go:74] duration metric: took 124.371489ms to wait for pod list to return data ...
	I0919 18:40:49.777947  738797 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:40:49.966755  738797 default_sa.go:45] found service account: "default"
	I0919 18:40:49.966782  738797 default_sa.go:55] duration metric: took 188.828197ms for default service account to be created ...
	I0919 18:40:49.966791  738797 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:40:50.089180  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.179215  738797 system_pods.go:86] 17 kube-system pods found
	I0919 18:40:50.179252  738797 system_pods.go:89] "coredns-7c65d6cfc9-fhvjq" [dd8b0777-d8b4-4ca1-a17f-9882d8125c02] Running
	I0919 18:40:50.179264  738797 system_pods.go:89] "csi-hostpath-attacher-0" [9f7d35fc-a216-46b1-b985-806564137d3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:40:50.179271  738797 system_pods.go:89] "csi-hostpath-resizer-0" [c8ae5991-891a-4332-a3fe-a59832909de7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:40:50.179280  738797 system_pods.go:89] "csi-hostpathplugin-mwch8" [e053f2d9-ad13-4ed9-9897-bb280f40436a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:40:50.179286  738797 system_pods.go:89] "etcd-addons-810228" [658e4d40-1d03-49d8-823d-dbd45b05f468] Running
	I0919 18:40:50.179291  738797 system_pods.go:89] "kube-apiserver-addons-810228" [3a880ebe-50d7-4a64-8b62-4dc847ad683d] Running
	I0919 18:40:50.179297  738797 system_pods.go:89] "kube-controller-manager-addons-810228" [571fc519-faaf-4855-b16c-14d86a2e2663] Running
	I0919 18:40:50.179302  738797 system_pods.go:89] "kube-ingress-dns-minikube" [d18a22d5-16b5-476b-a6b0-8d6709de3aa0] Running
	I0919 18:40:50.179306  738797 system_pods.go:89] "kube-proxy-c9r5f" [eba6e1b3-80b2-4365-bd5e-a3081b58bf46] Running
	I0919 18:40:50.179318  738797 system_pods.go:89] "kube-scheduler-addons-810228" [4e0ea48f-d14f-4966-8892-b5cf588bada9] Running
	I0919 18:40:50.179324  738797 system_pods.go:89] "metrics-server-84c5f94fbc-lxrnr" [29562cd5-ea79-4850-8f5a-af67127f08a7] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:40:50.179342  738797 system_pods.go:89] "nvidia-device-plugin-daemonset-gtn9v" [5894bc48-dcaa-4555-a48a-4648d0d4a6c4] Running
	I0919 18:40:50.179347  738797 system_pods.go:89] "registry-66c9cd494c-fd6l9" [22ee96cc-eafb-42a6-9f00-4d14a9bbfa5a] Running
	I0919 18:40:50.179351  738797 system_pods.go:89] "registry-proxy-8mf99" [7567e96d-94ff-4199-aa1d-8f7b62234e4d] Running
	I0919 18:40:50.179359  738797 system_pods.go:89] "snapshot-controller-56fcc65765-7rhtc" [6305125f-9acf-43f3-9cb2-e5b158474a0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:50.179370  738797 system_pods.go:89] "snapshot-controller-56fcc65765-g9gnd" [16c9d372-2509-4a83-877d-a00fb10ec6c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:50.179375  738797 system_pods.go:89] "storage-provisioner" [f7b250ff-a723-4ea4-87fd-8682f7488768] Running
	I0919 18:40:50.179384  738797 system_pods.go:126] duration metric: took 212.583586ms to wait for k8s-apps to be running ...
	I0919 18:40:50.179399  738797 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:40:50.179462  738797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:40:50.209824  738797 system_svc.go:56] duration metric: took 30.416415ms WaitForService to wait for kubelet
	I0919 18:40:50.209854  738797 kubeadm.go:582] duration metric: took 39.324002757s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:50.209873  738797 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:40:50.266813  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.368163  738797 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 18:40:50.368205  738797 node_conditions.go:123] node cpu capacity is 2
	I0919 18:40:50.368219  738797 node_conditions.go:105] duration metric: took 158.339996ms to run NodePressure ...
	I0919 18:40:50.368232  738797 start.go:241] waiting for startup goroutines ...
	I0919 18:40:50.589504  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.770063  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.086904  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.268217  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.586284  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.785457  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.086100  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.267883  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.586557  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.785621  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.086393  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.267563  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.586477  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.767972  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.090146  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.267777  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.586468  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.768067  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.087345  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.267496  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.585573  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.766675  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.087006  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.267043  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.586033  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.765916  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.086080  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.267123  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.586210  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.768008  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.085751  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.265713  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.586201  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.766283  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.085714  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.265638  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.588216  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.767997  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.094924  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.303244  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.586283  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.767077  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.085795  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.267492  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.586286  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.774538  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.086751  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.288131  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.586567  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.767818  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.085946  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.266118  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.587294  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.767679  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.086080  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.267655  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.585803  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.766690  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.088306  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.266207  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.586324  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.767140  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.085972  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.267343  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.585474  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.767287  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.085907  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.266356  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.586413  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.766636  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.085804  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.266450  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.586461  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.766516  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.087557  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.266956  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.586122  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.767060  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.108651  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.266819  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.588221  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.789216  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.087317  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.266005  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.586973  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.781779  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.086287  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.268678  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.586358  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.766031  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.086980  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.267079  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.586194  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.766600  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.087011  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.267153  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.585920  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.770461  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.086874  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.266474  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.586398  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.766620  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.086370  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.266838  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.586951  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.766383  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.086736  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.266231  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.586154  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.766588  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.086240  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.266466  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.587299  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.788054  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.085686  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.266221  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.586025  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.766107  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.086577  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.266795  738797 kapi.go:107] duration metric: took 55.505467385s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:41:20.585445  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.086473  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.585649  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.085784  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.586346  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.085270  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.585491  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.085756  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.586554  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.086460  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.586348  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.085753  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.586693  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.086586  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.586794  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.086000  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.585903  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.087150  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.585969  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.089836  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.585828  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.085549  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.587528  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.086575  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.590401  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:33.085423  738797 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:33.588175  738797 kapi.go:107] duration metric: took 1m12.007051217s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:41:48.621732  738797 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:41:48.621762  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.120804  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.620842  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:50.122176  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:50.620938  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.120796  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.620641  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.121067  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.621722  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.121487  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.621384  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.121608  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.622078  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:55.121444  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:55.621656  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:56.121785  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:56.621513  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.121753  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.620660  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.121482  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.621325  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.120919  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.621174  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:00.126512  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:00.621198  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.121332  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.620510  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:02.121229  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:02.620994  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.121618  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.621433  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.122079  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.621492  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.121173  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.620622  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.121880  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.620891  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.121125  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.621473  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.121345  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.621340  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:09.121873  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:09.621635  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.121731  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.621527  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.121280  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.621417  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.128192  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.621225  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.121308  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.621862  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.120940  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.621550  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.122279  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.620355  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:16.121323  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:16.621131  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.121398  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.620880  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:18.120968  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:18.620587  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.121073  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.620936  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.125017  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.621731  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.121270  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.621685  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.121839  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.620626  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.121355  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.621116  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:24.121397  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:24.621422  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.121481  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.621089  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:26.121193  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:26.621550  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.121741  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.621092  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.120872  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.621207  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:29.121320  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:29.621369  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.121836  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.620698  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.121606  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.621106  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.121184  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.620900  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.121091  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.621271  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:34.120878  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:34.620688  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:35.122044  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:35.620473  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:36.121564  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:36.621435  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:37.121396  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:37.625603  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:38.121784  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:38.620178  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:39.120761  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:39.621738  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:40.122604  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:40.620660  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:41.121393  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:41.621443  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:42.122561  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:42.620700  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:43.121235  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:43.620730  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:44.121096  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:44.622145  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:45.125528  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:45.621072  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:46.120851  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:46.621491  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:47.121298  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:47.621728  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:48.121944  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:48.621212  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:49.121900  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:49.621126  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:50.121696  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:50.621251  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:51.120838  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:51.621208  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:52.121358  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:52.621455  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:53.122710  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:53.621669  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:54.121739  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:54.622848  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:55.120702  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:55.621415  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:56.121438  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:56.621371  738797 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:57.122029  738797 kapi.go:107] duration metric: took 2m30.504754094s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:42:57.124207  738797 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-810228 cluster.
	I0919 18:42:57.126899  738797 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:42:57.129031  738797 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:42:57.131356  738797 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, volcano, cloud-spanner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0919 18:42:57.133196  738797 addons.go:510] duration metric: took 2m46.247048801s for enable addons: enabled=[ingress-dns storage-provisioner volcano cloud-spanner nvidia-device-plugin metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0919 18:42:57.133256  738797 start.go:246] waiting for cluster config update ...
	I0919 18:42:57.133278  738797 start.go:255] writing updated cluster config ...
	I0919 18:42:57.133580  738797 ssh_runner.go:195] Run: rm -f paused
	I0919 18:42:57.509948  738797 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:42:57.512697  738797 out.go:177] * Done! kubectl is now configured to use "addons-810228" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 19 18:52:34 addons-810228 dockerd[1285]: time="2024-09-19T18:52:34.606618806Z" level=error msg="Error running exec af43647fe41b1edaaf2ba3e0ebab029c84607c229e35b5d400d6873f84918a57 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 19 18:52:34 addons-810228 dockerd[1285]: time="2024-09-19T18:52:34.683292103Z" level=info msg="ignoring event" container=4c519899e669055d04cef09b9774caeaa356fa69b38f094399170e4bf4b21153 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:40 addons-810228 dockerd[1285]: time="2024-09-19T18:52:40.119006796Z" level=info msg="ignoring event" container=684fac07fa43945ced2ce834c049bb857094af37e0f4b6791c9f5a47a2277e02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:40 addons-810228 dockerd[1285]: time="2024-09-19T18:52:40.136052795Z" level=info msg="ignoring event" container=bce384e9ccb15a775e7de37cf723723802508c0d7608ae009406ca76e418b10f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:40 addons-810228 dockerd[1285]: time="2024-09-19T18:52:40.295520947Z" level=info msg="ignoring event" container=71ae09c5157e7e09786969568eb9e822b68ade3bf78e9b2ce21fa01601150204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:40 addons-810228 dockerd[1285]: time="2024-09-19T18:52:40.331716633Z" level=info msg="ignoring event" container=ba934f78e5031eb7b0d45c1cb59fe2f5433999018beb5dec9a503faebea8d683 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:45 addons-810228 dockerd[1285]: time="2024-09-19T18:52:45.897741711Z" level=info msg="ignoring event" container=4ba81da3c38c2573c57d5e9aa4c5360baf624b09f1c8004416382076d0e26c8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:46 addons-810228 dockerd[1285]: time="2024-09-19T18:52:46.053456132Z" level=info msg="ignoring event" container=4ac3f7a453a4ca8f557b18729790b85775c81bba1d8a64bc0a9c74053ec89313 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:46 addons-810228 cri-dockerd[1543]: time="2024-09-19T18:52:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7788ef97c8b2cfcb82799a1124fe1b6ba198336ee0deead50f4768c507d6d4a1/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 19 18:52:47 addons-810228 dockerd[1285]: time="2024-09-19T18:52:47.019502895Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 19 18:52:47 addons-810228 cri-dockerd[1543]: time="2024-09-19T18:52:47Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 19 18:52:47 addons-810228 dockerd[1285]: time="2024-09-19T18:52:47.645603123Z" level=info msg="ignoring event" container=0a47f144ae4d0d76e65a149962b59fa3b0bf20cf8f83bce1e845c56ec903d877 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:49 addons-810228 dockerd[1285]: time="2024-09-19T18:52:49.041004732Z" level=info msg="ignoring event" container=7788ef97c8b2cfcb82799a1124fe1b6ba198336ee0deead50f4768c507d6d4a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:50 addons-810228 cri-dockerd[1543]: time="2024-09-19T18:52:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/afe8db0fe10dd46aa334ddfd4de1f06741f02de39b94b8055cb9dff219244ca4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 19 18:52:51 addons-810228 cri-dockerd[1543]: time="2024-09-19T18:52:51Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 19 18:52:51 addons-810228 dockerd[1285]: time="2024-09-19T18:52:51.920976172Z" level=info msg="ignoring event" container=055769b7186a5160d0c9d047433c8b0a0d26f638001cf165544a447753a427f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:53 addons-810228 dockerd[1285]: time="2024-09-19T18:52:53.123777828Z" level=info msg="ignoring event" container=afe8db0fe10dd46aa334ddfd4de1f06741f02de39b94b8055cb9dff219244ca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:54 addons-810228 cri-dockerd[1543]: time="2024-09-19T18:52:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cf9e7640b64568064564ca9af7e2cd528c10ef8a2f1ba9ef9a1f77f9776df55/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 19 18:52:54 addons-810228 dockerd[1285]: time="2024-09-19T18:52:54.748497206Z" level=info msg="ignoring event" container=c93badaf065477bc172a43aec79cac62f8f38366c2a602560b8a7305ffa9bbc2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:54 addons-810228 dockerd[1285]: time="2024-09-19T18:52:54.912539770Z" level=info msg="ignoring event" container=70795d5f64f7ca0f7b242077240055a11f4d9dca39a6c39574447ae7db86b819 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:55 addons-810228 dockerd[1285]: time="2024-09-19T18:52:55.555324007Z" level=info msg="ignoring event" container=6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:55 addons-810228 dockerd[1285]: time="2024-09-19T18:52:55.647005332Z" level=info msg="ignoring event" container=69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:55 addons-810228 dockerd[1285]: time="2024-09-19T18:52:55.790244960Z" level=info msg="ignoring event" container=c90d8e8c633126cc9b3f543870ebe57761a8f8c44202c89024613be0e722b43e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:55 addons-810228 dockerd[1285]: time="2024-09-19T18:52:55.883839827Z" level=info msg="ignoring event" container=a5575b9d005e705fb66b7061c14d3ea2f9d6e98217be2513e3980da6c1b51453 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:56 addons-810228 dockerd[1285]: time="2024-09-19T18:52:56.349227477Z" level=info msg="ignoring event" container=8cf9e7640b64568064564ca9af7e2cd528c10ef8a2f1ba9ef9a1f77f9776df55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c93badaf06547       fc9db2894f4e4                                                                                                                2 seconds ago       Exited              helper-pod                0                   8cf9e7640b645       helper-pod-delete-pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7
	4c519899e6690       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            25 seconds ago      Exited              gadget                    7                   f41180999da98       gadget-d957r
	cdc7af92d1409       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   52de0e85053bf       gcp-auth-89d5ffd79-fqk56
	297ecf5004f42       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   b8ea292b9a43d       ingress-nginx-controller-bc57996ff-g9mmb
	3f1fba29df758       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   7c140d53b1f0c       ingress-nginx-admission-patch-spdtf
	46fb1fc2b4230       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   f74a2c2417809       ingress-nginx-admission-create-q2rdm
	9ee2f8cf9e70c       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   55aa60c4376b7       metrics-server-84c5f94fbc-lxrnr
	8461de00e71d0       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   e898337d909c3       local-path-provisioner-86d989889c-d94gx
	0bb8a3fe2ee09       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   36d195b2b0cb3       kube-ingress-dns-minikube
	b143243dd44ca       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   17079649bc3af       cloud-spanner-emulator-769b77f747-dvfn8
	819d5bc52458c       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   62ef32b86de73       storage-provisioner
	13dd9e42645de       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   c46b1268ecaad       coredns-7c65d6cfc9-fhvjq
	d4ca805f7bf16       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   afe2fec0aae59       kube-proxy-c9r5f
	4a2fcde9a7791       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   983e4a4cf8834       kube-apiserver-addons-810228
	0cbc4c46b15cc       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   353f7b474616d       kube-controller-manager-addons-810228
	b7b8965a96a96       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   a396e893fa74e       kube-scheduler-addons-810228
	102164d699fc3       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   a72f17815d889       etcd-addons-810228
	
	
	==> controller_ingress [297ecf5004f4] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0919 18:41:32.209304       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0919 18:41:33.212026       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0919 18:41:33.245746       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0919 18:41:33.261688       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0919 18:41:33.270055       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"74981e39-5147-4142-b172-34339ed4c5ce", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0919 18:41:33.270333       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0d93ed7d-e7d8-4665-9b76-76d8af2ab750", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0919 18:41:33.270933       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"d56f8c84-27b7-4290-9d8f-9d45671fd6da", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0919 18:41:34.464021       6 nginx.go:317] "Starting NGINX process"
	I0919 18:41:34.464104       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0919 18:41:34.464981       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0919 18:41:34.465201       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0919 18:41:34.491615       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0919 18:41:34.493973       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-g9mmb"
	I0919 18:41:34.510037       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-g9mmb" node="addons-810228"
	I0919 18:41:34.520872       6 controller.go:213] "Backend successfully reloaded"
	I0919 18:41:34.520962       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0919 18:41:34.521042       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-g9mmb", UID:"cb17b124-068f-49ef-a6c8-d584e741f910", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [13dd9e42645d] <==
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 10.244.0.7:33456 - 48305 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000323001s
	[INFO] 10.244.0.7:33456 - 32700 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000376483s
	[INFO] 10.244.0.7:42632 - 41649 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000192926s
	[INFO] 10.244.0.7:42632 - 18365 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00027387s
	[INFO] 10.244.0.7:56771 - 38561 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128664s
	[INFO] 10.244.0.7:56771 - 21154 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065001s
	[INFO] 10.244.0.7:56223 - 47768 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000119147s
	[INFO] 10.244.0.7:56223 - 49050 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081903s
	[INFO] 10.244.0.7:46388 - 46092 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0023023s
	[INFO] 10.244.0.7:46388 - 56330 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001887828s
	[INFO] 10.244.0.7:36760 - 22441 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077423s
	[INFO] 10.244.0.7:36760 - 48552 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096475s
	[INFO] 10.244.0.25:42846 - 59339 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000242331s
	[INFO] 10.244.0.25:43610 - 24158 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000082543s
	[INFO] 10.244.0.25:53580 - 4736 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107627s
	[INFO] 10.244.0.25:52146 - 33407 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069342s
	[INFO] 10.244.0.25:39639 - 31453 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082133s
	[INFO] 10.244.0.25:36737 - 53535 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059578s
	[INFO] 10.244.0.25:42051 - 24242 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002307036s
	[INFO] 10.244.0.25:60163 - 27181 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002442905s
	[INFO] 10.244.0.25:44776 - 53895 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001618467s
	[INFO] 10.244.0.25:46538 - 32494 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000787154s
	
	
	==> describe nodes <==
	Name:               addons-810228
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-810228
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-810228
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_40_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-810228
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:40:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-810228
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:52:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:48:47 +0000   Thu, 19 Sep 2024 18:39:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:48:47 +0000   Thu, 19 Sep 2024 18:39:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:48:47 +0000   Thu, 19 Sep 2024 18:39:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:48:47 +0000   Thu, 19 Sep 2024 18:40:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-810228
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3782e1194304d448713f98c4821eebe
	  System UUID:                b8b50c90-300c-4c15-a597-54029fbfe7ca
	  Boot ID:                    d978711d-6560-4648-a132-b62ea922575e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-769b77f747-dvfn8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-d957r                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-fqk56                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-g9mmb    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-fhvjq                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-810228                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-810228                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-810228       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-c9r5f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-810228                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-lxrnr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-d94gx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-810228 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-810228 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-810228 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-810228 event: Registered Node addons-810228 in Controller
	
	
	==> dmesg <==
	[Sep19 17:36] systemd-journald[216]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Sep19 18:12] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.009249] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.001831] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [102164d699fc] <==
	{"level":"info","ts":"2024-09-19T18:39:59.256852Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T18:39:59.256868Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T18:39:59.445361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-19T18:39:59.445411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-19T18:39:59.445435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-19T18:39:59.445451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:59.445457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:59.445467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:59.445474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-19T18:39:59.455475Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-810228 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T18:39:59.455631Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:59.459178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:39:59.459287Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:59.459374Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:59.459395Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:39:59.459433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T18:39:59.459443Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T18:39:59.459453Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:39:59.460116Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:39:59.460149Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:39:59.461171Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-19T18:39:59.467817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:50:00.951304Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1848}
	{"level":"info","ts":"2024-09-19T18:50:01.009815Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1848,"took":"57.464265ms","hash":3003539869,"current-db-size-bytes":8953856,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4890624,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-19T18:50:01.009875Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3003539869,"revision":1848,"compact-revision":-1}
	
	
	==> gcp-auth [cdc7af92d140] <==
	2024/09/19 18:42:56 GCP Auth Webhook started!
	2024/09/19 18:43:14 Ready to marshal response ...
	2024/09/19 18:43:14 Ready to write response ...
	2024/09/19 18:43:14 Ready to marshal response ...
	2024/09/19 18:43:14 Ready to write response ...
	2024/09/19 18:43:38 Ready to marshal response ...
	2024/09/19 18:43:38 Ready to write response ...
	2024/09/19 18:43:39 Ready to marshal response ...
	2024/09/19 18:43:39 Ready to write response ...
	2024/09/19 18:43:39 Ready to marshal response ...
	2024/09/19 18:43:39 Ready to write response ...
	2024/09/19 18:51:54 Ready to marshal response ...
	2024/09/19 18:51:54 Ready to write response ...
	2024/09/19 18:52:04 Ready to marshal response ...
	2024/09/19 18:52:04 Ready to write response ...
	2024/09/19 18:52:23 Ready to marshal response ...
	2024/09/19 18:52:23 Ready to write response ...
	2024/09/19 18:52:46 Ready to marshal response ...
	2024/09/19 18:52:46 Ready to write response ...
	2024/09/19 18:52:46 Ready to marshal response ...
	2024/09/19 18:52:46 Ready to write response ...
	2024/09/19 18:52:53 Ready to marshal response ...
	2024/09/19 18:52:53 Ready to write response ...
	
	
	==> kernel <==
	 18:52:57 up  3:35,  0 users,  load average: 0.36, 0.61, 1.71
	Linux addons-810228 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [4a2fcde9a779] <==
	I0919 18:43:29.452175       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0919 18:43:29.496459       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0919 18:43:29.578747       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0919 18:43:29.904568       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0919 18:43:29.970962       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0919 18:43:30.054047       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0919 18:43:30.307335       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0919 18:43:30.553013       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0919 18:43:30.578711       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0919 18:43:30.692005       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0919 18:43:30.745877       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0919 18:43:31.054048       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0919 18:43:31.340836       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0919 18:52:11.759839       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0919 18:52:39.874528       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:39.874796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:39.911505       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:39.911553       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:39.914973       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:39.915004       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:39.934948       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:39.934993       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 18:52:40.915662       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 18:52:41.093691       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0919 18:52:41.093692       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [0cbc4c46b15c] <==
	E0919 18:52:41.097602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:42.025429       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:42.025486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:42.129357       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:42.129419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:42.190515       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:42.190565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:44.073333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:44.073376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:44.758111       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:44.758159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:44.771623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:44.771665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:47.677046       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:47.677089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:48.800172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:48.800221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:49.566561       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:49.566606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:54.297050       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:54.297094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:52:54.588858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="9.978µs"
	W0919 18:52:55.149969       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:55.150016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:52:55.483120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="19.569µs"
	
	
	==> kube-proxy [d4ca805f7bf1] <==
	I0919 18:40:13.840084       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:40:13.948714       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:40:13.948776       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:40:13.993050       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:40:13.993118       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:40:14.001734       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:40:14.002552       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:40:14.002577       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:40:14.016939       1 config.go:199] "Starting service config controller"
	I0919 18:40:14.016999       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:40:14.017025       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:40:14.017030       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:40:14.018197       1 config.go:328] "Starting node config controller"
	I0919 18:40:14.018210       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:40:14.120274       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:40:14.120312       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:40:14.120354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b7b8965a96a9] <==
	W0919 18:40:03.488055       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:03.488079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:03.488174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:40:03.488191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:03.488280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:40:03.488300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.302074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:40:04.302185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.337135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:40:04.337493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.362494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:04.362706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.371181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:40:04.371359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.394620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:40:04.394876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.496493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:40:04.496762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.580348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:40:04.580558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.589069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:40:04.589326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:04.814636       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:40:04.814691       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0919 18:40:06.677032       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.069356    2333 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5gbld\" (UniqueName: \"kubernetes.io/projected/22ee96cc-eafb-42a6-9f00-4d14a9bbfa5a-kube-api-access-5gbld\") on node \"addons-810228\" DevicePath \"\""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.072553    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7567e96d-94ff-4199-aa1d-8f7b62234e4d-kube-api-access-xntjb" (OuterVolumeSpecName: "kube-api-access-xntjb") pod "7567e96d-94ff-4199-aa1d-8f7b62234e4d" (UID: "7567e96d-94ff-4199-aa1d-8f7b62234e4d"). InnerVolumeSpecName "kube-api-access-xntjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.142433    2333 scope.go:117] "RemoveContainer" containerID="69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.170055    2333 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xntjb\" (UniqueName: \"kubernetes.io/projected/7567e96d-94ff-4199-aa1d-8f7b62234e4d-kube-api-access-xntjb\") on node \"addons-810228\" DevicePath \"\""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.205731    2333 scope.go:117] "RemoveContainer" containerID="69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: E0919 18:52:56.211599    2333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb" containerID="69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.211683    2333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb"} err="failed to get container status \"69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb\": rpc error: code = Unknown desc = Error response from daemon: No such container: 69fcd04d32e5390745c80dd9192620b6eff502227484e0bdf6465dfba430f1bb"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.211733    2333 scope.go:117] "RemoveContainer" containerID="6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.265642    2333 scope.go:117] "RemoveContainer" containerID="6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: E0919 18:52:56.267615    2333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0" containerID="6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.267668    2333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0"} err="failed to get container status \"6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6bd3ac0d1a024a2fe4b6fa450542646b52a29679bbf8c8d0369ab3f71bf107e0"
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.573413    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf498\" (UniqueName: \"kubernetes.io/projected/4f6432ab-b074-4e10-b3c1-db37df07e679-kube-api-access-zf498\") pod \"4f6432ab-b074-4e10-b3c1-db37df07e679\" (UID: \"4f6432ab-b074-4e10-b3c1-db37df07e679\") "
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.573477    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4f6432ab-b074-4e10-b3c1-db37df07e679-script\") pod \"4f6432ab-b074-4e10-b3c1-db37df07e679\" (UID: \"4f6432ab-b074-4e10-b3c1-db37df07e679\") "
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.573505    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4f6432ab-b074-4e10-b3c1-db37df07e679-data\") pod \"4f6432ab-b074-4e10-b3c1-db37df07e679\" (UID: \"4f6432ab-b074-4e10-b3c1-db37df07e679\") "
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.573524    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4f6432ab-b074-4e10-b3c1-db37df07e679-gcp-creds\") pod \"4f6432ab-b074-4e10-b3c1-db37df07e679\" (UID: \"4f6432ab-b074-4e10-b3c1-db37df07e679\") "
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.573630    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f6432ab-b074-4e10-b3c1-db37df07e679-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4f6432ab-b074-4e10-b3c1-db37df07e679" (UID: "4f6432ab-b074-4e10-b3c1-db37df07e679"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.574036    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f6432ab-b074-4e10-b3c1-db37df07e679-script" (OuterVolumeSpecName: "script") pod "4f6432ab-b074-4e10-b3c1-db37df07e679" (UID: "4f6432ab-b074-4e10-b3c1-db37df07e679"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.574071    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f6432ab-b074-4e10-b3c1-db37df07e679-data" (OuterVolumeSpecName: "data") pod "4f6432ab-b074-4e10-b3c1-db37df07e679" (UID: "4f6432ab-b074-4e10-b3c1-db37df07e679"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.578739    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6432ab-b074-4e10-b3c1-db37df07e679-kube-api-access-zf498" (OuterVolumeSpecName: "kube-api-access-zf498") pod "4f6432ab-b074-4e10-b3c1-db37df07e679" (UID: "4f6432ab-b074-4e10-b3c1-db37df07e679"). InnerVolumeSpecName "kube-api-access-zf498". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.674184    2333 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4f6432ab-b074-4e10-b3c1-db37df07e679-gcp-creds\") on node \"addons-810228\" DevicePath \"\""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.674223    2333 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zf498\" (UniqueName: \"kubernetes.io/projected/4f6432ab-b074-4e10-b3c1-db37df07e679-kube-api-access-zf498\") on node \"addons-810228\" DevicePath \"\""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.674237    2333 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4f6432ab-b074-4e10-b3c1-db37df07e679-script\") on node \"addons-810228\" DevicePath \"\""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: I0919 18:52:56.674246    2333 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4f6432ab-b074-4e10-b3c1-db37df07e679-data\") on node \"addons-810228\" DevicePath \"\""
	Sep 19 18:52:56 addons-810228 kubelet[2333]: E0919 18:52:56.899988    2333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4678c168-7b7a-41e7-8b5e-eece6e883d5d"
	Sep 19 18:52:57 addons-810228 kubelet[2333]: I0919 18:52:57.202927    2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf9e7640b64568064564ca9af7e2cd528c10ef8a2f1ba9ef9a1f77f9776df55"
	
	
	==> storage-provisioner [819d5bc52458] <==
	I0919 18:40:16.948451       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:40:16.970851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:40:16.970912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:40:16.984682       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:40:16.987407       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-810228_b92812d6-e501-49c4-b552-0c22e14a3568!
	I0919 18:40:16.994623       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5ca5390-1e3a-4a6a-bddb-8c5f0b4317e6", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-810228_b92812d6-e501-49c4-b552-0c22e14a3568 became leader
	I0919 18:40:17.088403       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-810228_b92812d6-e501-49c4-b552-0c22e14a3568!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-810228 -n addons-810228
helpers_test.go:261: (dbg) Run:  kubectl --context addons-810228 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-q2rdm ingress-nginx-admission-patch-spdtf helper-pod-delete-pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-810228 describe pod busybox ingress-nginx-admission-create-q2rdm ingress-nginx-admission-patch-spdtf helper-pod-delete-pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-810228 describe pod busybox ingress-nginx-admission-create-q2rdm ingress-nginx-admission-patch-spdtf helper-pod-delete-pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7: exit status 1 (101.905462ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-810228/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:43:39 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6lhw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w6lhw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-810228
	  Normal   Pulling    7m48s (x4 over 9m19s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m19s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m19s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m18s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m16s (x20 over 9m18s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-q2rdm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-spdtf" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-810228 describe pod busybox ingress-nginx-admission-create-q2rdm ingress-nginx-admission-patch-spdtf helper-pod-delete-pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.56s)


Test pass (318/343)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 7.36
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
22 TestOffline 88.94
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 220.24
29 TestAddons/serial/Volcano 41.33
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Ingress 19.45
35 TestAddons/parallel/InspektorGadget 11.78
36 TestAddons/parallel/MetricsServer 5.72
39 TestAddons/parallel/CSI 46.09
40 TestAddons/parallel/Headlamp 17.82
41 TestAddons/parallel/CloudSpanner 5.53
42 TestAddons/parallel/LocalPath 51.67
43 TestAddons/parallel/NvidiaDevicePlugin 5.48
44 TestAddons/parallel/Yakd 11.74
45 TestAddons/StoppedEnableDisable 5.99
46 TestCertOptions 38.72
47 TestCertExpiration 267.06
48 TestDockerFlags 43.23
49 TestForceSystemdFlag 59.12
50 TestForceSystemdEnv 41.07
56 TestErrorSpam/setup 33.05
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 1.11
59 TestErrorSpam/pause 1.45
60 TestErrorSpam/unpause 1.5
61 TestErrorSpam/stop 2.2
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 66.48
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 31.2
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.18
73 TestFunctional/serial/CacheCmd/cache/add_local 0.95
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 39.32
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.13
84 TestFunctional/serial/LogsFileCmd 1.15
85 TestFunctional/serial/InvalidService 4.48
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 14.63
89 TestFunctional/parallel/DryRun 0.56
90 TestFunctional/parallel/InternationalLanguage 0.24
91 TestFunctional/parallel/StatusCmd 1.27
95 TestFunctional/parallel/ServiceCmdConnect 11.75
96 TestFunctional/parallel/AddonsCmd 0.2
97 TestFunctional/parallel/PersistentVolumeClaim 28.44
99 TestFunctional/parallel/SSHCmd 0.72
100 TestFunctional/parallel/CpCmd 2.37
102 TestFunctional/parallel/FileSync 0.4
103 TestFunctional/parallel/CertSync 2.18
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
111 TestFunctional/parallel/License 0.35
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.53
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
124 TestFunctional/parallel/ServiceCmd/List 0.61
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
126 TestFunctional/parallel/ProfileCmd/profile_list 0.56
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
130 TestFunctional/parallel/MountCmd/any-port 8.46
131 TestFunctional/parallel/ServiceCmd/Format 0.65
132 TestFunctional/parallel/ServiceCmd/URL 0.42
133 TestFunctional/parallel/MountCmd/specific-port 2.48
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.91
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.21
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.37
142 TestFunctional/parallel/ImageCommands/Setup 0.74
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.85
149 TestFunctional/parallel/DockerEnv/bash 1.33
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 130.27
161 TestMultiControlPlane/serial/DeployApp 45.29
162 TestMultiControlPlane/serial/PingHostFromPods 1.75
163 TestMultiControlPlane/serial/AddWorkerNode 28.23
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.19
166 TestMultiControlPlane/serial/CopyFile 20.26
167 TestMultiControlPlane/serial/StopSecondaryNode 11.72
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
169 TestMultiControlPlane/serial/RestartSecondaryNode 70.69
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 159.38
172 TestMultiControlPlane/serial/DeleteSecondaryNode 10.37
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
174 TestMultiControlPlane/serial/StopCluster 32.94
175 TestMultiControlPlane/serial/RestartCluster 104.29
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.86
177 TestMultiControlPlane/serial/AddSecondaryNode 46.38
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
181 TestImageBuild/serial/Setup 31
182 TestImageBuild/serial/NormalBuild 2.13
183 TestImageBuild/serial/BuildWithBuildArg 1.03
184 TestImageBuild/serial/BuildWithDockerIgnore 0.91
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.78
189 TestJSONOutput/start/Command 40.65
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.63
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.57
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.97
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 32.98
215 TestKicCustomNetwork/use_default_bridge_network 33.02
216 TestKicExistingNetwork 31.55
217 TestKicCustomSubnet 33.96
218 TestKicStaticIP 33.89
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 68.26
223 TestMountStart/serial/StartWithMountFirst 10.35
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 7.93
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.46
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 8.39
231 TestMountStart/serial/VerifyMountPostStop 0.27
234 TestMultiNode/serial/FreshStart2Nodes 84.5
235 TestMultiNode/serial/DeployApp2Nodes 43.22
236 TestMultiNode/serial/PingHostFrom2Pods 1.01
237 TestMultiNode/serial/AddNode 18.74
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.69
240 TestMultiNode/serial/CopyFile 10.61
241 TestMultiNode/serial/StopNode 2.3
242 TestMultiNode/serial/StartAfterStop 10.81
243 TestMultiNode/serial/RestartKeepsNodes 106.09
244 TestMultiNode/serial/DeleteNode 5.74
245 TestMultiNode/serial/StopMultiNode 21.55
246 TestMultiNode/serial/RestartMultiNode 56.05
247 TestMultiNode/serial/ValidateNameConflict 34.55
252 TestPreload 140.04
254 TestScheduledStopUnix 106.15
255 TestSkaffold 118.3
257 TestInsufficientStorage 11.36
258 TestRunningBinaryUpgrade 96.16
260 TestKubernetesUpgrade 138.53
261 TestMissingContainerUpgrade 121.07
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 44.29
265 TestNoKubernetes/serial/StartWithStopK8s 18.22
266 TestNoKubernetes/serial/Start 6.99
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
268 TestNoKubernetes/serial/ProfileList 1.14
269 TestNoKubernetes/serial/Stop 1.24
270 TestNoKubernetes/serial/StartNoArgs 8.66
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
283 TestStoppedBinaryUpgrade/Setup 0.67
284 TestStoppedBinaryUpgrade/Upgrade 133.27
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
294 TestPause/serial/Start 90.02
295 TestNetworkPlugins/group/auto/Start 53.66
296 TestNetworkPlugins/group/auto/KubeletFlags 0.31
297 TestNetworkPlugins/group/auto/NetCatPod 11.32
298 TestNetworkPlugins/group/auto/DNS 0.24
299 TestNetworkPlugins/group/auto/Localhost 0.18
300 TestNetworkPlugins/group/auto/HairPin 0.18
301 TestPause/serial/SecondStartNoReconfiguration 37.61
302 TestNetworkPlugins/group/kindnet/Start 76.19
303 TestPause/serial/Pause 0.84
304 TestPause/serial/VerifyStatus 0.55
305 TestPause/serial/Unpause 0.82
306 TestPause/serial/PauseAgain 1.04
307 TestPause/serial/DeletePaused 2.51
308 TestPause/serial/VerifyDeletedResources 2.97
309 TestNetworkPlugins/group/calico/Start 78.8
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
312 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
313 TestNetworkPlugins/group/kindnet/DNS 0.39
314 TestNetworkPlugins/group/kindnet/Localhost 0.28
315 TestNetworkPlugins/group/kindnet/HairPin 0.28
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/custom-flannel/Start 62.55
318 TestNetworkPlugins/group/calico/KubeletFlags 0.35
319 TestNetworkPlugins/group/calico/NetCatPod 14.33
320 TestNetworkPlugins/group/calico/DNS 0.23
321 TestNetworkPlugins/group/calico/Localhost 0.23
322 TestNetworkPlugins/group/calico/HairPin 0.25
323 TestNetworkPlugins/group/false/Start 80.7
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.38
326 TestNetworkPlugins/group/custom-flannel/DNS 0.33
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
329 TestNetworkPlugins/group/enable-default-cni/Start 74.04
330 TestNetworkPlugins/group/false/KubeletFlags 0.43
331 TestNetworkPlugins/group/false/NetCatPod 11.44
332 TestNetworkPlugins/group/false/DNS 0.25
333 TestNetworkPlugins/group/false/Localhost 0.21
334 TestNetworkPlugins/group/false/HairPin 0.24
335 TestNetworkPlugins/group/flannel/Start 56.27
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.33
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.28
341 TestNetworkPlugins/group/bridge/Start 51.35
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
344 TestNetworkPlugins/group/flannel/NetCatPod 11.32
345 TestNetworkPlugins/group/flannel/DNS 0.29
346 TestNetworkPlugins/group/flannel/Localhost 0.2
347 TestNetworkPlugins/group/flannel/HairPin 0.22
348 TestNetworkPlugins/group/kubenet/Start 61.73
349 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
350 TestNetworkPlugins/group/bridge/NetCatPod 11.52
351 TestNetworkPlugins/group/bridge/DNS 0.26
352 TestNetworkPlugins/group/bridge/Localhost 0.21
353 TestNetworkPlugins/group/bridge/HairPin 0.19
355 TestStartStop/group/old-k8s-version/serial/FirstStart 145.87
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
357 TestNetworkPlugins/group/kubenet/NetCatPod 11.26
358 TestNetworkPlugins/group/kubenet/DNS 0.23
359 TestNetworkPlugins/group/kubenet/Localhost 0.2
360 TestNetworkPlugins/group/kubenet/HairPin 0.18
362 TestStartStop/group/embed-certs/serial/FirstStart 46.65
363 TestStartStop/group/embed-certs/serial/DeployApp 9.38
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
365 TestStartStop/group/embed-certs/serial/Stop 11.03
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
367 TestStartStop/group/embed-certs/serial/SecondStart 268.04
368 TestStartStop/group/old-k8s-version/serial/DeployApp 10.62
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
370 TestStartStop/group/old-k8s-version/serial/Stop 10.94
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
372 TestStartStop/group/old-k8s-version/serial/SecondStart 141.26
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
376 TestStartStop/group/old-k8s-version/serial/Pause 2.74
378 TestStartStop/group/no-preload/serial/FirstStart 50.09
379 TestStartStop/group/no-preload/serial/DeployApp 10.36
380 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
381 TestStartStop/group/no-preload/serial/Stop 10.9
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
384 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
385 TestStartStop/group/no-preload/serial/SecondStart 269.83
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
387 TestStartStop/group/embed-certs/serial/Pause 3.67
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.28
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.04
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.72
395 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
397 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
398 TestStartStop/group/no-preload/serial/Pause 3
400 TestStartStop/group/newest-cni/serial/FirstStart 38.53
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
403 TestStartStop/group/newest-cni/serial/Stop 5.74
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
405 TestStartStop/group/newest-cni/serial/SecondStart 21.15
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
409 TestStartStop/group/newest-cni/serial/Pause 3.97
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
TestDownloadOnly/v1.20.0/json-events (7.82s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-266640 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-266640 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.816773108s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.82s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0919 18:39:07.672791  738020 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0919 18:39:07.672877  738020 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-266640
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-266640: exit status 85 (73.813244ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-266640 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |          |
	|         | -p download-only-266640        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:38:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:38:59.896545  738025 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:38:59.896724  738025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:59.896734  738025 out.go:358] Setting ErrFile to fd 2...
	I0919 18:38:59.896739  738025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:59.896979  738025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	W0919 18:38:59.897118  738025 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19664-732615/.minikube/config/config.json: open /home/jenkins/minikube-integration/19664-732615/.minikube/config/config.json: no such file or directory
	I0919 18:38:59.897523  738025 out.go:352] Setting JSON to true
	I0919 18:38:59.898414  738025 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12080,"bootTime":1726759060,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0919 18:38:59.898489  738025 start.go:139] virtualization:  
	I0919 18:38:59.901178  738025 out.go:97] [download-only-266640] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0919 18:38:59.901353  738025 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:38:59.901392  738025 notify.go:220] Checking for updates...
	I0919 18:38:59.902971  738025 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:38:59.904629  738025 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:38:59.906102  738025 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	I0919 18:38:59.907851  738025 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	I0919 18:38:59.909291  738025 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0919 18:38:59.912735  738025 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:38:59.912996  738025 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:38:59.935714  738025 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:38:59.935830  738025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:00.009310  738025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-19 18:38:59.988501992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:00.009459  738025 docker.go:318] overlay module found
	I0919 18:39:00.013129  738025 out.go:97] Using the docker driver based on user configuration
	I0919 18:39:00.013174  738025 start.go:297] selected driver: docker
	I0919 18:39:00.013183  738025 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:00.013315  738025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:00.230447  738025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-19 18:39:00.207690769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:00.230705  738025 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:00.231043  738025 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0919 18:39:00.231271  738025 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:39:00.233614  738025 out.go:169] Using Docker driver with root privileges
	I0919 18:39:00.235474  738025 cni.go:84] Creating CNI manager for ""
	I0919 18:39:00.235579  738025 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 18:39:00.235688  738025 start.go:340] cluster config:
	{Name:download-only-266640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-266640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:00.237546  738025 out.go:97] Starting "download-only-266640" primary control-plane node in "download-only-266640" cluster
	I0919 18:39:00.237592  738025 cache.go:121] Beginning downloading kic base image for docker with docker
	I0919 18:39:00.240418  738025 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:00.240480  738025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 18:39:00.240570  738025 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:00.276028  738025 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:00.276236  738025 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:00.276387  738025 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:00.302777  738025 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0919 18:39:00.302810  738025 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:00.303551  738025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0919 18:39:00.305704  738025 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0919 18:39:00.305741  738025 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 18:39:00.415423  738025 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-266640 host does not exist
	  To start a cluster, run: "minikube start -p download-only-266640"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
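The download step in the log above fetches the preload tarball with a `?checksum=md5:...` query and validates it before caching under `.minikube/cache/preloaded-tarball/`. A minimal sketch of that kind of check (hypothetical helper names, not minikube's actual code):

```python
import hashlib

def md5_of(path: str) -> str:
    """Compute the md5 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def preload_matches(path: str, expected_md5: str) -> bool:
    """True if the cached tarball's digest matches the published checksum."""
    return md5_of(path) == expected_md5
```

A cached tarball whose digest does not match the checksum published alongside the download URL would be re-fetched rather than reused.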

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-266640
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (7.36s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-736260 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-736260 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.354824477s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.36s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0919 18:39:15.444926  738020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0919 18:39:15.444969  738020 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-736260
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-736260: exit status 85 (69.603768ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-266640 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-266640        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-266640        | download-only-266640 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only        | download-only-736260 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-736260        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:08.135159  738238 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:08.135337  738238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:08.135346  738238 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:08.135352  738238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:08.135587  738238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 18:39:08.136000  738238 out.go:352] Setting JSON to true
	I0919 18:39:08.136853  738238 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12089,"bootTime":1726759060,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0919 18:39:08.136928  738238 start.go:139] virtualization:  
	I0919 18:39:08.139997  738238 out.go:97] [download-only-736260] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 18:39:08.140247  738238 notify.go:220] Checking for updates...
	I0919 18:39:08.142338  738238 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:39:08.144298  738238 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:08.145928  738238 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	I0919 18:39:08.148215  738238 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	I0919 18:39:08.150091  738238 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0919 18:39:08.153861  738238 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:39:08.154114  738238 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:08.177300  738238 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:08.177409  738238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:08.237989  738238 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:08.228343553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:08.238116  738238 docker.go:318] overlay module found
	I0919 18:39:08.240584  738238 out.go:97] Using the docker driver based on user configuration
	I0919 18:39:08.240618  738238 start.go:297] selected driver: docker
	I0919 18:39:08.240626  738238 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:08.240745  738238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:08.302147  738238 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:08.292784217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:08.302361  738238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:08.302646  738238 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0919 18:39:08.302808  738238 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:39:08.305038  738238 out.go:169] Using Docker driver with root privileges
	I0919 18:39:08.307096  738238 cni.go:84] Creating CNI manager for ""
	I0919 18:39:08.307174  738238 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:39:08.307296  738238 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:08.307396  738238 start.go:340] cluster config:
	{Name:download-only-736260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-736260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:08.309195  738238 out.go:97] Starting "download-only-736260" primary control-plane node in "download-only-736260" cluster
	I0919 18:39:08.309212  738238 cache.go:121] Beginning downloading kic base image for docker with docker
	I0919 18:39:08.310959  738238 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:08.310986  738238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:39:08.311154  738238 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:08.326254  738238 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:08.326395  738238 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:08.326417  738238 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:39:08.326423  738238 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:39:08.326430  738238 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:39:08.369227  738238 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0919 18:39:08.369257  738238 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:08.369420  738238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:39:08.371376  738238 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0919 18:39:08.371406  738238 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0919 18:39:08.467810  738238 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19664-732615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-736260 host does not exist
	  To start a cluster, run: "minikube start -p download-only-736260"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-736260
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)
TestBinaryMirror (0.55s)
=== RUN   TestBinaryMirror
I0919 18:39:16.663892  738020 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-490984 --alsologtostderr --binary-mirror http://127.0.0.1:42445 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-490984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-490984
--- PASS: TestBinaryMirror (0.55s)
TestOffline (88.94s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-856992 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-856992 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m26.556683982s)
helpers_test.go:175: Cleaning up "offline-docker-856992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-856992
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-856992: (2.380017477s)
--- PASS: TestOffline (88.94s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-810228
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-810228: exit status 85 (63.015752ms)
-- stdout --
	* Profile "addons-810228" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-810228"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-810228
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-810228: exit status 85 (75.313537ms)
-- stdout --
	* Profile "addons-810228" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-810228"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
TestAddons/Setup (220.24s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-810228 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-810228 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m40.235900056s)
--- PASS: TestAddons/Setup (220.24s)
TestAddons/serial/Volcano (41.33s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 58.70285ms
addons_test.go:897: volcano-scheduler stabilized in 58.813505ms
addons_test.go:905: volcano-admission stabilized in 58.876726ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-bgtn9" [dd95405d-23fd-4d5b-966b-4c71c1ffa99d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003248621s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-gggc8" [122669bb-4392-45f7-8c05-5bf3a5b4cce1] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003276835s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-77slc" [e930d140-ddac-469a-b3bd-953872c2555d] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002844501s
addons_test.go:932: (dbg) Run:  kubectl --context addons-810228 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-810228 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-810228 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b366e608-1820-41eb-8841-a65e0533bd3c] Pending
helpers_test.go:344: "test-job-nginx-0" [b366e608-1820-41eb-8841-a65e0533bd3c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b366e608-1820-41eb-8841-a65e0533bd3c] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003736175s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 addons disable volcano --alsologtostderr -v=1: (10.585263166s)
--- PASS: TestAddons/serial/Volcano (41.33s)
TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-810228 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-810228 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)
TestAddons/parallel/Ingress (19.45s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-810228 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-810228 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-810228 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [31a262ae-c63f-46b8-8a59-408358a50b6b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [31a262ae-c63f-46b8-8a59-408358a50b6b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002949729s
I0919 18:53:47.133194  738020 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-810228 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 addons disable ingress-dns --alsologtostderr -v=1: (1.084877799s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 addons disable ingress --alsologtostderr -v=1: (7.680243082s)
--- PASS: TestAddons/parallel/Ingress (19.45s)
TestAddons/parallel/InspektorGadget (11.78s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d957r" [cc4d6f59-bc2b-42f7-b04b-99ee1cbf2fa5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003976488s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-810228
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-810228: (5.776712513s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)
TestAddons/parallel/MetricsServer (5.72s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.03537ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lxrnr" [29562cd5-ea79-4850-8f5a-af67127f08a7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003722288s
addons_test.go:417: (dbg) Run:  kubectl --context addons-810228 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)
TestAddons/parallel/CSI (46.09s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0919 18:51:54.225601  738020 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 18:51:54.231291  738020 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:51:54.231333  738020 kapi.go:107] duration metric: took 9.567353ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.58081ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-810228 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-810228 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d07ca5bb-fe0f-4dcb-b3a7-e7e0ace686fc] Pending
helpers_test.go:344: "task-pv-pod" [d07ca5bb-fe0f-4dcb-b3a7-e7e0ace686fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d07ca5bb-fe0f-4dcb-b3a7-e7e0ace686fc] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004614747s
addons_test.go:590: (dbg) Run:  kubectl --context addons-810228 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-810228 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-810228 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-810228 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-810228 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-810228 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-810228 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ad823d7a-7f15-497d-80db-3aa38357caae] Pending
helpers_test.go:344: "task-pv-pod-restore" [ad823d7a-7f15-497d-80db-3aa38357caae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ad823d7a-7f15-497d-80db-3aa38357caae] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004075778s
addons_test.go:632: (dbg) Run:  kubectl --context addons-810228 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-810228 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-810228 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.86588636s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.09s)

                                                
                                    
TestAddons/parallel/Headlamp (17.82s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-810228 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-jd72m" [43a3284a-d095-48bf-9321-aa1d2edfc9b4] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-jd72m" [43a3284a-d095-48bf-9321-aa1d2edfc9b4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-jd72m" [43a3284a-d095-48bf-9321-aa1d2edfc9b4] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003374601s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 addons disable headlamp --alsologtostderr -v=1: (5.869504242s)
--- PASS: TestAddons/parallel/Headlamp (17.82s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-dvfn8" [dd5e14e3-d232-47e0-8358-1a1a1fc41921] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004096895s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-810228
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (51.67s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-810228 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-810228 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-810228 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7a4da2b6-29be-4688-b9c0-3b2a9a136795] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7a4da2b6-29be-4688-b9c0-3b2a9a136795] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7a4da2b6-29be-4688-b9c0-3b2a9a136795] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003198092s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-810228 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 ssh "cat /opt/local-path-provisioner/pvc-4341cf70-6fd2-4a6e-bbb3-41f9710dd0f7_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-810228 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-810228 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.474136002s)
--- PASS: TestAddons/parallel/LocalPath (51.67s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gtn9v" [5894bc48-dcaa-4555-a48a-4648d0d4a6c4] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006261436s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-810228
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                    
TestAddons/parallel/Yakd (11.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rqcmd" [d7f8b86e-6b32-4b3c-ba23-bcd16f271306] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003643139s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-810228 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-810228 addons disable yakd --alsologtostderr -v=1: (5.725666702s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

                                                
                                    
TestAddons/StoppedEnableDisable (5.99s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-810228
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-810228: (5.733077975s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-810228
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-810228
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-810228
--- PASS: TestAddons/StoppedEnableDisable (5.99s)

                                                
                                    
TestCertOptions (38.72s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-858097 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-858097 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.937431698s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-858097 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-858097 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-858097 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-858097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-858097
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-858097: (2.097082904s)
--- PASS: TestCertOptions (38.72s)

                                                
                                    
TestCertExpiration (267.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-138861 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0919 19:31:00.648547  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-138861 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (41.918488047s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-138861 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-138861 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (43.053290081s)
helpers_test.go:175: Cleaning up "cert-expiration-138861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-138861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-138861: (2.082503921s)
--- PASS: TestCertExpiration (267.06s)

                                                
                                    
TestDockerFlags (43.23s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-197628 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-197628 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.046524093s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-197628 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-197628 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-197628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-197628
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-197628: (2.41154647s)
--- PASS: TestDockerFlags (43.23s)
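TestDockerFlags asserts that the `--docker-env=FOO=BAR --docker-env=BAZ=BAT` values surface in `systemctl show docker --property=Environment`. A minimal parser for that property line might look like this (illustrative helper, not minikube's code; it ignores systemd's quoting of values that contain spaces):

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnvironment turns a `systemctl show --property=Environment` line
// such as "Environment=FOO=BAR BAZ=BAT" into a key-to-value map.
func parseEnvironment(line string) map[string]string {
	env := map[string]string{}
	line = strings.TrimPrefix(strings.TrimSpace(line), "Environment=")
	for _, kv := range strings.Fields(line) {
		if k, v, ok := strings.Cut(kv, "="); ok {
			env[k] = v
		}
	}
	return env
}

func main() {
	env := parseEnvironment("Environment=FOO=BAR BAZ=BAT")
	fmt.Println(env["FOO"], env["BAZ"]) // prints: BAR BAT
}
```

The `docker_test.go:56` step above does essentially this comparison before checking `--docker-opt` values in ExecStart.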

                                                
                                    
TestForceSystemdFlag (59.12s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-792492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-792492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (56.175493876s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-792492 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-792492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-792492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-792492: (2.399146185s)
--- PASS: TestForceSystemdFlag (59.12s)

                                                
                                    
TestForceSystemdEnv (41.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-236267 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-236267 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.56648417s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-236267 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-236267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-236267
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-236267: (2.137208055s)
--- PASS: TestForceSystemdEnv (41.07s)

                                                
                                    
TestErrorSpam/setup (33.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-285330 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-285330 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-285330 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-285330 --driver=docker  --container-runtime=docker: (33.049842718s)
--- PASS: TestErrorSpam/setup (33.05s)

                                                
                                    
TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

                                                
                                    
TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
TestErrorSpam/pause (1.45s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 pause
--- PASS: TestErrorSpam/pause (1.45s)

                                                
                                    
TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

                                                
                                    
TestErrorSpam/stop (2.2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 stop: (2.011394925s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285330 --log_dir /tmp/nospam-285330 stop
--- PASS: TestErrorSpam/stop (2.20s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19664-732615/.minikube/files/etc/test/nested/copy/738020/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (66.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-273009 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-273009 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m6.477079211s)
--- PASS: TestFunctional/serial/StartWithProxy (66.48s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (31.2s)

=== RUN   TestFunctional/serial/SoftStart
I0919 18:55:53.968775  738020 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-273009 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-273009 --alsologtostderr -v=8: (31.199395467s)
functional_test.go:663: soft start took 31.201432888s for "functional-273009" cluster.
I0919 18:56:25.168567  738020 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (31.20s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-273009 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-273009 cache add registry.k8s.io/pause:3.1: (1.225019712s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-273009 cache add registry.k8s.io/pause:3.3: (1.0767063s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-273009 /tmp/TestFunctionalserialCacheCmdcacheadd_local805757281/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cache add minikube-local-cache-test:functional-273009
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cache delete minikube-local-cache-test:functional-273009
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-273009
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.873014ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 kubectl -- --context functional-273009 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-273009 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (39.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-273009 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-273009 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.319027534s)
functional_test.go:761: restart took 39.319155797s for "functional-273009" cluster.
I0919 18:57:11.319908  738020 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.32s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-273009 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-273009 logs: (1.129750515s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 logs --file /tmp/TestFunctionalserialLogsFileCmd3691317779/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-273009 logs --file /tmp/TestFunctionalserialLogsFileCmd3691317779/001/logs.txt: (1.151445671s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-273009 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-273009
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-273009: exit status 115 (629.111273ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31456 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-273009 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 config get cpus: exit status 14 (74.196677ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 config get cpus: exit status 14 (72.880019ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (14.63s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-273009 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-273009 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 779112: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.63s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-273009 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-273009 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (274.781452ms)

-- stdout --
	* [functional-273009] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0919 18:57:53.289771  778774 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:57:53.289976  778774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:57:53.289989  778774 out.go:358] Setting ErrFile to fd 2...
	I0919 18:57:53.289995  778774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:57:53.290266  778774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 18:57:53.290684  778774 out.go:352] Setting JSON to false
	I0919 18:57:53.291817  778774 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13214,"bootTime":1726759060,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0919 18:57:53.291897  778774 start.go:139] virtualization:  
	I0919 18:57:53.295626  778774 out.go:177] * [functional-273009] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 18:57:53.297983  778774 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:57:53.298174  778774 notify.go:220] Checking for updates...
	I0919 18:57:53.303272  778774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:57:53.305705  778774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	I0919 18:57:53.308144  778774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	I0919 18:57:53.310764  778774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 18:57:53.313684  778774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:57:53.316773  778774 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:57:53.317296  778774 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:57:53.361278  778774 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:57:53.361447  778774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:57:53.474239  778774 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-19 18:57:53.46215941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:57:53.474342  778774 docker.go:318] overlay module found
	I0919 18:57:53.477977  778774 out.go:177] * Using the docker driver based on existing profile
	I0919 18:57:53.480543  778774 start.go:297] selected driver: docker
	I0919 18:57:53.480563  778774 start.go:901] validating driver "docker" against &{Name:functional-273009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-273009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:57:53.480669  778774 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:57:53.483167  778774 out.go:201] 
	W0919 18:57:53.486079  778774 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 18:57:53.489128  778774 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-273009 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.56s)

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-273009 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-273009 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (239.192323ms)

-- stdout --
	* [functional-273009] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0919 18:57:53.046234  778728 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:57:53.046407  778728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:57:53.046433  778728 out.go:358] Setting ErrFile to fd 2...
	I0919 18:57:53.046450  778728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:57:53.047820  778728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 18:57:53.048310  778728 out.go:352] Setting JSON to false
	I0919 18:57:53.049528  778728 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13213,"bootTime":1726759060,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0919 18:57:53.049614  778728 start.go:139] virtualization:  
	I0919 18:57:53.052886  778728 out.go:177] * [functional-273009] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0919 18:57:53.055383  778728 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:57:53.055467  778728 notify.go:220] Checking for updates...
	I0919 18:57:53.061416  778728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:57:53.063552  778728 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	I0919 18:57:53.065489  778728 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	I0919 18:57:53.067683  778728 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 18:57:53.070182  778728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:57:53.072854  778728 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:57:53.073399  778728 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:57:53.108108  778728 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:57:53.108244  778728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:57:53.192791  778728 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-19 18:57:53.18043169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:57:53.193058  778728 docker.go:318] overlay module found
	I0919 18:57:53.198798  778728 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0919 18:57:53.202894  778728 start.go:297] selected driver: docker
	I0919 18:57:53.202916  778728 start.go:901] validating driver "docker" against &{Name:functional-273009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-273009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:57:53.203021  778728 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:57:53.205889  778728 out.go:201] 
	W0919 18:57:53.210036  778728 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 18:57:53.213351  778728 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

TestFunctional/parallel/ServiceCmdConnect (11.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-273009 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-273009 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-zsvbg" [0b29f30d-f4df-4fee-be29-27d0a0666e04] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-zsvbg" [0b29f30d-f4df-4fee-be29-27d0a0666e04] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003965686s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30299
functional_test.go:1675: http://192.168.49.2:30299: success! body:

Hostname: hello-node-connect-65d86f57f4-zsvbg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30299
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.75s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (28.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [36cfd581-2019-4e17-85bb-70a143017dc9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003654321s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-273009 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-273009 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-273009 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-273009 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [81a920a4-fd58-4080-9e79-023a0c6e3f65] Pending
helpers_test.go:344: "sp-pod" [81a920a4-fd58-4080-9e79-023a0c6e3f65] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [81a920a4-fd58-4080-9e79-023a0c6e3f65] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004317752s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-273009 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-273009 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-273009 delete -f testdata/storage-provisioner/pod.yaml: (1.356832878s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-273009 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [20fb973b-c146-49c7-bf2b-f9d85816663c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [20fb973b-c146-49c7-bf2b-f9d85816663c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003898932s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-273009 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.44s)

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh -n functional-273009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cp functional-273009:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2496392598/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh -n functional-273009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh -n functional-273009 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.37s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/738020/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo cat /etc/test/nested/copy/738020/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/738020.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo cat /etc/ssl/certs/738020.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/738020.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo cat /usr/share/ca-certificates/738020.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7380202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo cat /etc/ssl/certs/7380202.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7380202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo cat /usr/share/ca-certificates/7380202.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-273009 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 ssh "sudo systemctl is-active crio": exit status 1 (400.690174ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-273009 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-273009 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-273009 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-273009 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 775999: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-273009 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-273009 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [244ce4c0-fd99-4757-a346-ad684e2fe6da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [244ce4c0-fd99-4757-a346-ad684e2fe6da] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004360212s
I0919 18:57:29.503040  738020 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.53s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-273009 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.87.26 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-273009 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-273009 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-273009 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-d86f7" [e2985fac-cde2-47bd-90e0-93d1fbce3fae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-d86f7" [e2985fac-cde2-47bd-90e0-93d1fbce3fae] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.010799895s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "494.558266ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "61.10557ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 service list -o json
functional_test.go:1494: Took "634.358863ms" to run "out/minikube-linux-arm64 -p functional-273009 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "447.118992ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "98.64969ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32483
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/MountCmd/any-port (8.46s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdany-port466303009/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726772270604546161" to /tmp/TestFunctionalparallelMountCmdany-port466303009/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726772270604546161" to /tmp/TestFunctionalparallelMountCmdany-port466303009/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726772270604546161" to /tmp/TestFunctionalparallelMountCmdany-port466303009/001/test-1726772270604546161
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (448.27634ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 18:57:51.053705  738020 retry.go:31] will retry after 480.861423ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 18:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 18:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 18:57 test-1726772270604546161
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh cat /mount-9p/test-1726772270604546161
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-273009 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a5d61803-02b0-4520-9afd-7d00b5c8e028] Pending
helpers_test.go:344: "busybox-mount" [a5d61803-02b0-4520-9afd-7d00b5c8e028] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a5d61803-02b0-4520-9afd-7d00b5c8e028] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0919 18:57:57.568438  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:57:57.575086  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:57:57.586352  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:57:57.607643  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:57:57.649087  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:57:57.731018  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [a5d61803-02b0-4520-9afd-7d00b5c8e028] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00446669s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-273009 logs busybox-mount
E0919 18:57:57.892490  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh stat /mount-9p/created-by-test
E0919 18:57:58.214453  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo umount -f /mount-9p"
E0919 18:57:58.856399  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdany-port466303009/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.46s)

TestFunctional/parallel/ServiceCmd/Format (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.65s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32483
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (2.48s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdspecific-port2612645441/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (532.001017ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 18:57:59.591768  738020 retry.go:31] will retry after 425.204279ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T /mount-9p | grep 9p"
E0919 18:58:00.149217  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdspecific-port2612645441/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 ssh "sudo umount -f /mount-9p": exit status 1 (335.949415ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-273009 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdspecific-port2612645441/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.48s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.91s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559439263/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559439263/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559439263/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T" /mount1: exit status 1 (927.684767ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 18:58:02.467287  738020 retry.go:31] will retry after 682.38815ms: exit status 1
E0919 18:58:02.712321  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-273009 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559439263/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559439263/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-273009 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559439263/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.91s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-273009 version -o=json --components: (1.20681181s)
--- PASS: TestFunctional/parallel/Version/components (1.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-273009 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-273009
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-273009
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-273009 image ls --format short --alsologtostderr:
I0919 18:58:12.109300  781831 out.go:345] Setting OutFile to fd 1 ...
I0919 18:58:12.109527  781831 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:12.109556  781831 out.go:358] Setting ErrFile to fd 2...
I0919 18:58:12.109576  781831 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:12.110009  781831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
I0919 18:58:12.111252  781831 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:12.111826  781831 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:12.112785  781831 cli_runner.go:164] Run: docker container inspect functional-273009 --format={{.State.Status}}
I0919 18:58:12.142584  781831 ssh_runner.go:195] Run: systemctl --version
I0919 18:58:12.142640  781831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-273009
I0919 18:58:12.172943  781831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/functional-273009/id_rsa Username:docker}
I0919 18:58:12.272090  781831 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-273009 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/library/minikube-local-cache-test | functional-273009 | 521707f2b6f5d | 30B    |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-273009 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-273009 image ls --format table --alsologtostderr:
I0919 18:58:13.535067  782189 out.go:345] Setting OutFile to fd 1 ...
I0919 18:58:13.535302  782189 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:13.535332  782189 out.go:358] Setting ErrFile to fd 2...
I0919 18:58:13.535353  782189 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:13.535620  782189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
I0919 18:58:13.536359  782189 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:13.536529  782189 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:13.537127  782189 cli_runner.go:164] Run: docker container inspect functional-273009 --format={{.State.Status}}
I0919 18:58:13.554959  782189 ssh_runner.go:195] Run: systemctl --version
I0919 18:58:13.555028  782189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-273009
I0919 18:58:13.572070  782189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/functional-273009/id_rsa Username:docker}
I0919 18:58:13.673239  782189 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-273009 image ls --format json --alsologtostderr:
[{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"521707f2b6f5d6fd795411d7a162a333f160b06da246b7629e7869ee1a734eaa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-273009"],"size":"30"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-273009"],"size":"4780000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-273009 image ls --format json --alsologtostderr:
I0919 18:58:13.301445  782142 out.go:345] Setting OutFile to fd 1 ...
I0919 18:58:13.301722  782142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:13.301749  782142 out.go:358] Setting ErrFile to fd 2...
I0919 18:58:13.301757  782142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:13.302035  782142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
I0919 18:58:13.302750  782142 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:13.302917  782142 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:13.303515  782142 cli_runner.go:164] Run: docker container inspect functional-273009 --format={{.State.Status}}
I0919 18:58:13.322923  782142 ssh_runner.go:195] Run: systemctl --version
I0919 18:58:13.322976  782142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-273009
I0919 18:58:13.352752  782142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/functional-273009/id_rsa Username:docker}
I0919 18:58:13.455838  782142 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-273009 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 521707f2b6f5d6fd795411d7a162a333f160b06da246b7629e7869ee1a734eaa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-273009
size: "30"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-273009
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-273009 image ls --format yaml --alsologtostderr:
I0919 18:58:12.986515  782060 out.go:345] Setting OutFile to fd 1 ...
I0919 18:58:12.986717  782060 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:12.986748  782060 out.go:358] Setting ErrFile to fd 2...
I0919 18:58:12.986768  782060 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:12.987029  782060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
I0919 18:58:12.987757  782060 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:12.987942  782060 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:12.988467  782060 cli_runner.go:164] Run: docker container inspect functional-273009 --format={{.State.Status}}
I0919 18:58:13.019971  782060 ssh_runner.go:195] Run: systemctl --version
I0919 18:58:13.020020  782060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-273009
I0919 18:58:13.050494  782060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/functional-273009/id_rsa Username:docker}
I0919 18:58:13.163551  782060 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-273009 ssh pgrep buildkitd: exit status 1 (323.092184ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image build -t localhost/my-image:functional-273009 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-273009 image build -t localhost/my-image:functional-273009 testdata/build --alsologtostderr: (2.837483736s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-273009 image build -t localhost/my-image:functional-273009 testdata/build --alsologtostderr:
I0919 18:58:12.844981  782017 out.go:345] Setting OutFile to fd 1 ...
I0919 18:58:12.845722  782017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:12.845742  782017 out.go:358] Setting ErrFile to fd 2...
I0919 18:58:12.845750  782017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:58:12.846014  782017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
I0919 18:58:12.846701  782017 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:12.847787  782017 config.go:182] Loaded profile config "functional-273009": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:58:12.848288  782017 cli_runner.go:164] Run: docker container inspect functional-273009 --format={{.State.Status}}
I0919 18:58:12.868162  782017 ssh_runner.go:195] Run: systemctl --version
I0919 18:58:12.868275  782017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-273009
I0919 18:58:12.887327  782017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/functional-273009/id_rsa Username:docker}
I0919 18:58:12.999949  782017 build_images.go:161] Building image from path: /tmp/build.758861779.tar
I0919 18:58:13.000039  782017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 18:58:13.013736  782017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.758861779.tar
I0919 18:58:13.018057  782017 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.758861779.tar: stat -c "%s %y" /var/lib/minikube/build/build.758861779.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.758861779.tar': No such file or directory
I0919 18:58:13.018086  782017 ssh_runner.go:362] scp /tmp/build.758861779.tar --> /var/lib/minikube/build/build.758861779.tar (3072 bytes)
I0919 18:58:13.057038  782017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.758861779
I0919 18:58:13.068873  782017 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.758861779 -xf /var/lib/minikube/build/build.758861779.tar
I0919 18:58:13.080153  782017 docker.go:360] Building image: /var/lib/minikube/build/build.758861779
I0919 18:58:13.080231  782017 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-273009 /var/lib/minikube/build/build.758861779
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e0cb62f97234c1c8c0cb577e3b986d63f70f0001f3c87d19fd7907410b4f9774 done
#8 naming to localhost/my-image:functional-273009 done
#8 DONE 0.1s
I0919 18:58:15.603161  782017 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-273009 /var/lib/minikube/build/build.758861779: (2.52290521s)
I0919 18:58:15.603256  782017 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.758861779
I0919 18:58:15.612616  782017 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.758861779.tar
I0919 18:58:15.622532  782017 build_images.go:217] Built localhost/my-image:functional-273009 from /tmp/build.758861779.tar
I0919 18:58:15.622582  782017 build_images.go:133] succeeded building to: functional-273009
I0919 18:58:15.622589  782017 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)
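From the build steps logged above (#5 pulls `gcr.io/k8s-minikube/busybox`, #6 runs `true`, #7 adds `content.txt`), the 97-byte Dockerfile transferred in step #1 is presumably equivalent to this reconstruction (inferred from the log, not copied from the minikube source tree):

```dockerfile
# Hypothetical reconstruction of the test Dockerfile, inferred from
# build steps #5-#7 in the log above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```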

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-273009
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image load --daemon kicbase/echo-server:functional-273009 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image load --daemon kicbase/echo-server:functional-273009 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
E0919 18:58:07.834431  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-273009
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image load --daemon kicbase/echo-server:functional-273009 --alsologtostderr
2024/09/19 18:58:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image save kicbase/echo-server:functional-273009 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image rm kicbase/echo-server:functional-273009 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-273009 docker-env) && out/minikube-linux-arm64 status -p functional-273009"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-273009 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-273009
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 image save --daemon kicbase/echo-server:functional-273009 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-273009
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-273009 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-273009
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-273009
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-273009
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (130.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-662592 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0919 18:58:38.558305  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:59:19.520381  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-662592 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m9.406769716s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (130.27s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (45.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-662592 -- rollout status deployment/busybox: (5.003113528s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:00:34.156859  738020 retry.go:31] will retry after 673.642657ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:00:35.022995  738020 retry.go:31] will retry after 1.11457373s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:00:36.330226  738020 retry.go:31] will retry after 2.718479904s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:00:39.240180  738020 retry.go:31] will retry after 3.737572145s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0919 19:00:41.442644  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:00:43.174305  738020 retry.go:31] will retry after 3.338558783s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:00:46.666685  738020 retry.go:31] will retry after 8.035585022s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:00:54.878796  738020 retry.go:31] will retry after 5.814230182s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 19:01:00.862914  738020 retry.go:31] will retry after 10.405595012s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-hr8nz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-nr5l2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-p59k5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-hr8nz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-nr5l2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-p59k5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-hr8nz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-nr5l2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-p59k5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (45.29s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-hr8nz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-hr8nz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-nr5l2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-nr5l2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-p59k5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-662592 -- exec busybox-7dff88458-p59k5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (28.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-662592 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-662592 -v=7 --alsologtostderr: (27.144969014s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr: (1.089366916s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.23s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-662592 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.185091653s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.19s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 status --output json -v=7 --alsologtostderr: (1.118192914s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp testdata/cp-test.txt ha-662592:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1745910218/001/cp-test_ha-662592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592:/home/docker/cp-test.txt ha-662592-m02:/home/docker/cp-test_ha-662592_ha-662592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test_ha-662592_ha-662592-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592:/home/docker/cp-test.txt ha-662592-m03:/home/docker/cp-test_ha-662592_ha-662592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test_ha-662592_ha-662592-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592:/home/docker/cp-test.txt ha-662592-m04:/home/docker/cp-test_ha-662592_ha-662592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test_ha-662592_ha-662592-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp testdata/cp-test.txt ha-662592-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1745910218/001/cp-test_ha-662592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m02:/home/docker/cp-test.txt ha-662592:/home/docker/cp-test_ha-662592-m02_ha-662592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test_ha-662592-m02_ha-662592.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m02:/home/docker/cp-test.txt ha-662592-m03:/home/docker/cp-test_ha-662592-m02_ha-662592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test_ha-662592-m02_ha-662592-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m02:/home/docker/cp-test.txt ha-662592-m04:/home/docker/cp-test_ha-662592-m02_ha-662592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test_ha-662592-m02_ha-662592-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp testdata/cp-test.txt ha-662592-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1745910218/001/cp-test_ha-662592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m03:/home/docker/cp-test.txt ha-662592:/home/docker/cp-test_ha-662592-m03_ha-662592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test_ha-662592-m03_ha-662592.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m03:/home/docker/cp-test.txt ha-662592-m02:/home/docker/cp-test_ha-662592-m03_ha-662592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test_ha-662592-m03_ha-662592-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m03:/home/docker/cp-test.txt ha-662592-m04:/home/docker/cp-test_ha-662592-m03_ha-662592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test_ha-662592-m03_ha-662592-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp testdata/cp-test.txt ha-662592-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1745910218/001/cp-test_ha-662592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m04:/home/docker/cp-test.txt ha-662592:/home/docker/cp-test_ha-662592-m04_ha-662592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592 "sudo cat /home/docker/cp-test_ha-662592-m04_ha-662592.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m04:/home/docker/cp-test.txt ha-662592-m02:/home/docker/cp-test_ha-662592-m04_ha-662592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m02 "sudo cat /home/docker/cp-test_ha-662592-m04_ha-662592-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 cp ha-662592-m04:/home/docker/cp-test.txt ha-662592-m03:/home/docker/cp-test_ha-662592-m04_ha-662592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 ssh -n ha-662592-m03 "sudo cat /home/docker/cp-test_ha-662592-m04_ha-662592-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.26s)

TestMultiControlPlane/serial/StopSecondaryNode (11.72s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 node stop m02 -v=7 --alsologtostderr: (10.920945999s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr: exit status 7 (794.351002ms)

-- stdout --
	ha-662592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-662592-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-662592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-662592-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0919 19:02:16.567636  804985 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:02:16.567878  804985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:16.567909  804985 out.go:358] Setting ErrFile to fd 2...
	I0919 19:02:16.567930  804985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:16.568203  804985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 19:02:16.568426  804985 out.go:352] Setting JSON to false
	I0919 19:02:16.568514  804985 mustload.go:65] Loading cluster: ha-662592
	I0919 19:02:16.568588  804985 notify.go:220] Checking for updates...
	I0919 19:02:16.569660  804985 config.go:182] Loaded profile config "ha-662592": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:02:16.569727  804985 status.go:174] checking status of ha-662592 ...
	I0919 19:02:16.570316  804985 cli_runner.go:164] Run: docker container inspect ha-662592 --format={{.State.Status}}
	I0919 19:02:16.593862  804985 status.go:364] ha-662592 host status = "Running" (err=<nil>)
	I0919 19:02:16.593885  804985 host.go:66] Checking if "ha-662592" exists ...
	I0919 19:02:16.594207  804985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-662592
	I0919 19:02:16.625028  804985 host.go:66] Checking if "ha-662592" exists ...
	I0919 19:02:16.625357  804985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:02:16.625405  804985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-662592
	I0919 19:02:16.655565  804985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/ha-662592/id_rsa Username:docker}
	I0919 19:02:16.756740  804985 ssh_runner.go:195] Run: systemctl --version
	I0919 19:02:16.761351  804985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:02:16.775603  804985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:02:16.840102  804985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-19 19:02:16.82923532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:02:16.840698  804985 kubeconfig.go:125] found "ha-662592" server: "https://192.168.49.254:8443"
	I0919 19:02:16.840733  804985 api_server.go:166] Checking apiserver status ...
	I0919 19:02:16.840783  804985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:02:16.853140  804985 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2222/cgroup
	I0919 19:02:16.862701  804985 api_server.go:182] apiserver freezer: "2:freezer:/docker/79c6d3ab144e4f04e8a67038afc47e5eed5349f417777e0c8af24bd175bd18ab/kubepods/burstable/pod522693802cc8d988667b4914b4986187/65fa15826135e6a8138749a311e5703bc5858e976e23c45e8da7b6f1a5121076"
	I0919 19:02:16.862771  804985 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/79c6d3ab144e4f04e8a67038afc47e5eed5349f417777e0c8af24bd175bd18ab/kubepods/burstable/pod522693802cc8d988667b4914b4986187/65fa15826135e6a8138749a311e5703bc5858e976e23c45e8da7b6f1a5121076/freezer.state
	I0919 19:02:16.872072  804985 api_server.go:204] freezer state: "THAWED"
	I0919 19:02:16.872104  804985 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 19:02:16.879828  804985 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 19:02:16.879859  804985 status.go:456] ha-662592 apiserver status = Running (err=<nil>)
	I0919 19:02:16.879869  804985 status.go:176] ha-662592 status: &{Name:ha-662592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:02:16.879885  804985 status.go:174] checking status of ha-662592-m02 ...
	I0919 19:02:16.880206  804985 cli_runner.go:164] Run: docker container inspect ha-662592-m02 --format={{.State.Status}}
	I0919 19:02:16.897996  804985 status.go:364] ha-662592-m02 host status = "Stopped" (err=<nil>)
	I0919 19:02:16.898020  804985 status.go:377] host is not running, skipping remaining checks
	I0919 19:02:16.898028  804985 status.go:176] ha-662592-m02 status: &{Name:ha-662592-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:02:16.898047  804985 status.go:174] checking status of ha-662592-m03 ...
	I0919 19:02:16.898353  804985 cli_runner.go:164] Run: docker container inspect ha-662592-m03 --format={{.State.Status}}
	I0919 19:02:16.917555  804985 status.go:364] ha-662592-m03 host status = "Running" (err=<nil>)
	I0919 19:02:16.917597  804985 host.go:66] Checking if "ha-662592-m03" exists ...
	I0919 19:02:16.917894  804985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-662592-m03
	I0919 19:02:16.935356  804985 host.go:66] Checking if "ha-662592-m03" exists ...
	I0919 19:02:16.935676  804985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:02:16.935714  804985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-662592-m03
	I0919 19:02:16.953294  804985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/ha-662592-m03/id_rsa Username:docker}
	I0919 19:02:17.056946  804985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:02:17.069410  804985 kubeconfig.go:125] found "ha-662592" server: "https://192.168.49.254:8443"
	I0919 19:02:17.069442  804985 api_server.go:166] Checking apiserver status ...
	I0919 19:02:17.069484  804985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:02:17.083947  804985 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2238/cgroup
	I0919 19:02:17.095749  804985 api_server.go:182] apiserver freezer: "2:freezer:/docker/39413147e8f85b0f450bad7772d7e27c1312f95a26f358bd0fb9959ac210a56c/kubepods/burstable/podb9f4988310e800e4c33acb8149369f37/d88eb5c54f5846529dcb61efcad7d50aa55c67610e15b109501f420714e1dfcf"
	I0919 19:02:17.095826  804985 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/39413147e8f85b0f450bad7772d7e27c1312f95a26f358bd0fb9959ac210a56c/kubepods/burstable/podb9f4988310e800e4c33acb8149369f37/d88eb5c54f5846529dcb61efcad7d50aa55c67610e15b109501f420714e1dfcf/freezer.state
	I0919 19:02:17.105323  804985 api_server.go:204] freezer state: "THAWED"
	I0919 19:02:17.105362  804985 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 19:02:17.113579  804985 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 19:02:17.113610  804985 status.go:456] ha-662592-m03 apiserver status = Running (err=<nil>)
	I0919 19:02:17.113620  804985 status.go:176] ha-662592-m03 status: &{Name:ha-662592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:02:17.113645  804985 status.go:174] checking status of ha-662592-m04 ...
	I0919 19:02:17.114014  804985 cli_runner.go:164] Run: docker container inspect ha-662592-m04 --format={{.State.Status}}
	I0919 19:02:17.136042  804985 status.go:364] ha-662592-m04 host status = "Running" (err=<nil>)
	I0919 19:02:17.136071  804985 host.go:66] Checking if "ha-662592-m04" exists ...
	I0919 19:02:17.136458  804985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-662592-m04
	I0919 19:02:17.152926  804985 host.go:66] Checking if "ha-662592-m04" exists ...
	I0919 19:02:17.153236  804985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:02:17.153282  804985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-662592-m04
	I0919 19:02:17.171347  804985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33563 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/ha-662592-m04/id_rsa Username:docker}
	I0919 19:02:17.269610  804985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:02:17.283143  804985 status.go:176] ha-662592-m04 status: &{Name:ha-662592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.72s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (70.69s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 node start m02 -v=7 --alsologtostderr
E0919 19:02:19.974916  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:19.981236  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:19.992596  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:20.016145  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:20.057529  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:20.139051  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:20.300585  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:20.622199  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:21.264104  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:22.545791  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:25.107505  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:30.228816  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:40.470068  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:57.568211  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:03:00.952291  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:03:25.284889  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 node start m02 -v=7 --alsologtostderr: (1m9.548208984s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr: (1.018140272s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (70.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019662326s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (159.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-662592 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-662592 -v=7 --alsologtostderr
E0919 19:03:41.915683  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-662592 -v=7 --alsologtostderr: (34.312900647s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-662592 --wait=true -v=7 --alsologtostderr
E0919 19:05:03.842039  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-662592 --wait=true -v=7 --alsologtostderr: (2m4.89316445s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-662592
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (159.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.37s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 node delete m03 -v=7 --alsologtostderr: (9.42835246s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.37s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

TestMultiControlPlane/serial/StopCluster (32.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 stop -v=7 --alsologtostderr: (32.836730804s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr: exit status 7 (107.83918ms)

-- stdout --
	ha-662592
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-662592-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-662592-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 19:06:53.230038  831317 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:06:53.230594  831317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:06:53.230608  831317 out.go:358] Setting ErrFile to fd 2...
	I0919 19:06:53.230615  831317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:06:53.230887  831317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 19:06:53.231094  831317 out.go:352] Setting JSON to false
	I0919 19:06:53.231138  831317 mustload.go:65] Loading cluster: ha-662592
	I0919 19:06:53.231610  831317 notify.go:220] Checking for updates...
	I0919 19:06:53.231624  831317 config.go:182] Loaded profile config "ha-662592": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:06:53.231764  831317 status.go:174] checking status of ha-662592 ...
	I0919 19:06:53.232320  831317 cli_runner.go:164] Run: docker container inspect ha-662592 --format={{.State.Status}}
	I0919 19:06:53.249033  831317 status.go:364] ha-662592 host status = "Stopped" (err=<nil>)
	I0919 19:06:53.249057  831317 status.go:377] host is not running, skipping remaining checks
	I0919 19:06:53.249064  831317 status.go:176] ha-662592 status: &{Name:ha-662592 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:06:53.249097  831317 status.go:174] checking status of ha-662592-m02 ...
	I0919 19:06:53.249404  831317 cli_runner.go:164] Run: docker container inspect ha-662592-m02 --format={{.State.Status}}
	I0919 19:06:53.268797  831317 status.go:364] ha-662592-m02 host status = "Stopped" (err=<nil>)
	I0919 19:06:53.268821  831317 status.go:377] host is not running, skipping remaining checks
	I0919 19:06:53.268828  831317 status.go:176] ha-662592-m02 status: &{Name:ha-662592-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:06:53.268846  831317 status.go:174] checking status of ha-662592-m04 ...
	I0919 19:06:53.269197  831317 cli_runner.go:164] Run: docker container inspect ha-662592-m04 --format={{.State.Status}}
	I0919 19:06:53.292066  831317 status.go:364] ha-662592-m04 host status = "Stopped" (err=<nil>)
	I0919 19:06:53.292088  831317 status.go:377] host is not running, skipping remaining checks
	I0919 19:06:53.292098  831317 status.go:176] ha-662592-m04 status: &{Name:ha-662592-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.94s)

TestMultiControlPlane/serial/RestartCluster (104.29s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-662592 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0919 19:07:19.975573  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:07:47.683490  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:07:57.568248  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-662592 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m43.268924241s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (104.29s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

TestMultiControlPlane/serial/AddSecondaryNode (46.38s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-662592 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-662592 --control-plane -v=7 --alsologtostderr: (45.363582117s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-662592 status -v=7 --alsologtostderr: (1.017631628s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.052740407s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestImageBuild/serial/Setup (31s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-147344 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-147344 --driver=docker  --container-runtime=docker: (31.00338272s)
--- PASS: TestImageBuild/serial/Setup (31.00s)

TestImageBuild/serial/NormalBuild (2.13s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-147344
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-147344: (2.133963701s)
--- PASS: TestImageBuild/serial/NormalBuild (2.13s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-147344
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-147344: (1.027434958s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.91s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-147344
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.91s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-147344
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

TestJSONOutput/start/Command (40.65s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-612405 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-612405 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (40.636851965s)
--- PASS: TestJSONOutput/start/Command (40.65s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-612405 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-612405 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.97s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-612405 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-612405 --output=json --user=testUser: (5.97461008s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-833649 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-833649 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.854373ms)
-- stdout --
	{"specversion":"1.0","id":"38fa0c08-49b6-43fe-833b-a2e7b7391884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-833649] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f55860a2-ac37-477a-89bc-0494c8db3fa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"b7b53392-d20a-4b9d-a606-cce0a667dcde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dc42d567-92d4-4615-9f7b-2af47a1d4f5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig"}}
	{"specversion":"1.0","id":"64836901-51bf-41e7-8232-3c7e6d7e8a56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube"}}
	{"specversion":"1.0","id":"549d6c52-5a8c-48ef-aca8-89f1268403f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"baa716bf-3056-41c7-adb9-a285bbb3ef3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"83e1ead4-e877-4ea3-8c1c-630a708d169f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-833649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-833649
--- PASS: TestErrorJSONOutput (0.23s)
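The `-- stdout --` block above shows that minikube's `--output=json` mode emits one CloudEvents envelope per line. As a rough illustration of consuming such a stream (the sample lines and the `classify` helper are hypothetical, modeled on the fields visible in the log, with ids shortened):

```python
import json

# Two sample event lines, modeled on the CloudEvents envelopes in the
# log above; field names are taken verbatim from that output.
lines = [
    '{"specversion":"1.0","id":"a1","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json",'
    '"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}',
    '{"specversion":"1.0","id":"b2","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver \'fail\' is not supported on linux/arm64"}}',
]

def classify(line):
    """Return (event kind, message) for one JSON event line."""
    ev = json.loads(line)
    kind = ev["type"].rsplit(".", 1)[-1]  # "step", "info", "error", ...
    return kind, ev["data"].get("message", "")

for line in lines:
    kind, msg = classify(line)
    print(f"{kind}: {msg}")
```

The test harness relies on exactly this property (one self-contained JSON object per line) to assert on step ordering in the `DistinctCurrentSteps` / `IncreasingCurrentSteps` subtests.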

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.98s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-017394 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-017394 --network=: (30.909261428s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-017394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-017394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-017394: (2.045981214s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.98s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.02s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-082752 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-082752 --network=bridge: (30.951742731s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-082752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-082752
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-082752: (2.039265986s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.02s)

                                                
                                    
TestKicExistingNetwork (31.55s)
=== RUN   TestKicExistingNetwork
I0919 19:12:08.261365  738020 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 19:12:08.277825  738020 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 19:12:08.277920  738020 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 19:12:08.277936  738020 cli_runner.go:164] Run: docker network inspect existing-network
W0919 19:12:08.295000  738020 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 19:12:08.295032  738020 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0919 19:12:08.295046  738020 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0919 19:12:08.295151  738020 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:12:08.312774  738020 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-82887955cde5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:11:d9:19:05} reservation:<nil>}
I0919 19:12:08.313141  738020 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b8f860}
I0919 19:12:08.313163  738020 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 19:12:08.313213  738020 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 19:12:08.387416  738020 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-438259 --network=existing-network
E0919 19:12:19.974802  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-438259 --network=existing-network: (29.384171158s)
helpers_test.go:175: Cleaning up "existing-network-438259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-438259
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-438259: (2.010438405s)
I0919 19:12:39.798991  738020 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.55s)
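The `network.go` lines above show the free-subnet scan: 192.168.49.0/24 is skipped because the default minikube bridge already holds it, and the next private candidate, 192.168.58.0/24, is used. A minimal sketch of that scan (the start/step constants mirror the 49 → 58 jump seen in the log but are assumptions for illustration, not minikube's actual allocation policy):

```python
import ipaddress

def first_free_subnet(taken, start=49, step=9, prefix=24):
    """Walk 192.168.x.0/<prefix> candidates, stepping the third octet,
    and return the first one that overlaps no already-taken subnet."""
    third_octet = start
    while third_octet < 255:
        candidate = ipaddress.ip_network(f"192.168.{third_octet}.0/{prefix}")
        if not any(candidate.overlaps(ipaddress.ip_network(t)) for t in taken):
            return str(candidate)
        third_octet += step
    return None  # exhausted the 192.168.0.0/16 range

print(first_free_subnet({"192.168.49.0/24"}))  # 192.168.58.0/24
```

The chosen subnet is then handed to `docker network create --subnet=... --gateway=...`, exactly as the `network_create.go:124` line records.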

                                                
                                    
TestKicCustomSubnet (33.96s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-504424 --subnet=192.168.60.0/24
E0919 19:12:57.568491  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-504424 --subnet=192.168.60.0/24: (31.791783637s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-504424 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-504424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-504424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-504424: (2.142386868s)
--- PASS: TestKicCustomSubnet (33.96s)

                                                
                                    
TestKicStaticIP (33.89s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-156172 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-156172 --static-ip=192.168.200.200: (31.650687402s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-156172 ip
helpers_test.go:175: Cleaning up "static-ip-156172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-156172
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-156172: (2.084947217s)
--- PASS: TestKicStaticIP (33.89s)

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (68.26s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-760631 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-760631 --driver=docker  --container-runtime=docker: (29.510067122s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-763092 --driver=docker  --container-runtime=docker
E0919 19:14:20.646249  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-763092 --driver=docker  --container-runtime=docker: (33.214018247s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-760631
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-763092
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-763092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-763092
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-763092: (2.112445958s)
helpers_test.go:175: Cleaning up "first-760631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-760631
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-760631: (2.070749163s)
--- PASS: TestMinikubeProfile (68.26s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.35s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-716148 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-716148 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.35468921s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.35s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-716148 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.93s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-718035 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-718035 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.930684892s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.93s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-718035 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.46s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-716148 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-716148 --alsologtostderr -v=5: (1.460558913s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-718035 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-718035
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-718035: (1.198947444s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.39s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-718035
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-718035: (7.386940691s)
--- PASS: TestMountStart/serial/RestartStopped (8.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-718035 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (84.5s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303478 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303478 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.876792651s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.50s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (43.22s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-303478 -- rollout status deployment/busybox: (3.775740792s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:16:56.649242  738020 retry.go:31] will retry after 683.109958ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:16:57.483578  738020 retry.go:31] will retry after 1.402089077s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:16:59.046736  738020 retry.go:31] will retry after 2.299490571s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:17:01.505154  738020 retry.go:31] will retry after 3.972495306s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:17:05.635156  738020 retry.go:31] will retry after 6.900159636s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:17:12.689884  738020 retry.go:31] will retry after 6.587711559s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:17:19.435665  738020 retry.go:31] will retry after 14.428750211s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0919 19:17:19.975286  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-hcjrj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-q46wk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-hcjrj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-q46wk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-hcjrj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-q46wk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (43.22s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-hcjrj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-hcjrj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-q46wk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303478 -- exec busybox-7dff88458-q46wk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
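The PingHostFrom2Pods commands above extract the host IP by piping busybox nslookup through `awk 'NR==5' | cut -d' ' -f3`, i.e. field 3 of line 5. A small Go sketch of that parsing; the sample nslookup output below is illustrative, not captured from this run:

```go
package main

import (
	"fmt"
	"strings"
)

// hostIP mimics the shell pipeline used in the test:
//   nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
// i.e. take the 5th line of nslookup output and return its 3rd
// space-separated field (where busybox nslookup prints the answer IP).
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // single-space split, like cut -d' '
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox-style output; line 5 is "Address 1: <ip>".
	out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1\n"
	fmt.Println(hostIP(out))
}
```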

                                                
                                    
TestMultiNode/serial/AddNode (18.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-303478 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-303478 -v 3 --alsologtostderr: (17.950963831s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-303478 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp testdata/cp-test.txt multinode-303478:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile331753946/001/cp-test_multinode-303478.txt
E0919 19:17:57.567991  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478:/home/docker/cp-test.txt multinode-303478-m02:/home/docker/cp-test_multinode-303478_multinode-303478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m02 "sudo cat /home/docker/cp-test_multinode-303478_multinode-303478-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478:/home/docker/cp-test.txt multinode-303478-m03:/home/docker/cp-test_multinode-303478_multinode-303478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m03 "sudo cat /home/docker/cp-test_multinode-303478_multinode-303478-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp testdata/cp-test.txt multinode-303478-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile331753946/001/cp-test_multinode-303478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478-m02:/home/docker/cp-test.txt multinode-303478:/home/docker/cp-test_multinode-303478-m02_multinode-303478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478 "sudo cat /home/docker/cp-test_multinode-303478-m02_multinode-303478.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478-m02:/home/docker/cp-test.txt multinode-303478-m03:/home/docker/cp-test_multinode-303478-m02_multinode-303478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m03 "sudo cat /home/docker/cp-test_multinode-303478-m02_multinode-303478-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp testdata/cp-test.txt multinode-303478-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile331753946/001/cp-test_multinode-303478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478-m03:/home/docker/cp-test.txt multinode-303478:/home/docker/cp-test_multinode-303478-m03_multinode-303478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478 "sudo cat /home/docker/cp-test_multinode-303478-m03_multinode-303478.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 cp multinode-303478-m03:/home/docker/cp-test.txt multinode-303478-m02:/home/docker/cp-test_multinode-303478-m03_multinode-303478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 ssh -n multinode-303478-m02 "sudo cat /home/docker/cp-test_multinode-303478-m03_multinode-303478-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.61s)

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-303478 node stop m03: (1.215439228s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303478 status: exit status 7 (529.732143ms)

                                                
                                                
-- stdout --
	multinode-303478
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-303478-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-303478-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr: exit status 7 (551.886017ms)

                                                
                                                
-- stdout --
	multinode-303478
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-303478-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-303478-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:18:08.629933  905908 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:18:08.630078  905908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:18:08.630086  905908 out.go:358] Setting ErrFile to fd 2...
	I0919 19:18:08.630092  905908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:18:08.630351  905908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 19:18:08.630531  905908 out.go:352] Setting JSON to false
	I0919 19:18:08.630567  905908 mustload.go:65] Loading cluster: multinode-303478
	I0919 19:18:08.630668  905908 notify.go:220] Checking for updates...
	I0919 19:18:08.631038  905908 config.go:182] Loaded profile config "multinode-303478": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:18:08.631069  905908 status.go:174] checking status of multinode-303478 ...
	I0919 19:18:08.631701  905908 cli_runner.go:164] Run: docker container inspect multinode-303478 --format={{.State.Status}}
	I0919 19:18:08.650174  905908 status.go:364] multinode-303478 host status = "Running" (err=<nil>)
	I0919 19:18:08.650201  905908 host.go:66] Checking if "multinode-303478" exists ...
	I0919 19:18:08.650507  905908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-303478
	I0919 19:18:08.671477  905908 host.go:66] Checking if "multinode-303478" exists ...
	I0919 19:18:08.671791  905908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:18:08.671843  905908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-303478
	I0919 19:18:08.697993  905908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33673 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/multinode-303478/id_rsa Username:docker}
	I0919 19:18:08.797162  905908 ssh_runner.go:195] Run: systemctl --version
	I0919 19:18:08.803100  905908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:18:08.825986  905908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:18:08.881298  905908 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-19 19:18:08.870561181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:18:08.881933  905908 kubeconfig.go:125] found "multinode-303478" server: "https://192.168.67.2:8443"
	I0919 19:18:08.881974  905908 api_server.go:166] Checking apiserver status ...
	I0919 19:18:08.882026  905908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:18:08.894093  905908 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0919 19:18:08.903996  905908 api_server.go:182] apiserver freezer: "2:freezer:/docker/d2ea3261a939f6ea3d348f7383c14e68c2cd179852314d25d2f9a5800d18b988/kubepods/burstable/podbd1b0fa734d44ffe4696eb4367280dba/5fe0c0a87286d53a7b06be69808d576d29beb84ec5d6d512829274de1a13c892"
	I0919 19:18:08.904078  905908 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d2ea3261a939f6ea3d348f7383c14e68c2cd179852314d25d2f9a5800d18b988/kubepods/burstable/podbd1b0fa734d44ffe4696eb4367280dba/5fe0c0a87286d53a7b06be69808d576d29beb84ec5d6d512829274de1a13c892/freezer.state
	I0919 19:18:08.913637  905908 api_server.go:204] freezer state: "THAWED"
	I0919 19:18:08.913667  905908 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 19:18:08.921498  905908 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 19:18:08.921532  905908 status.go:456] multinode-303478 apiserver status = Running (err=<nil>)
	I0919 19:18:08.921544  905908 status.go:176] multinode-303478 status: &{Name:multinode-303478 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:18:08.921567  905908 status.go:174] checking status of multinode-303478-m02 ...
	I0919 19:18:08.921885  905908 cli_runner.go:164] Run: docker container inspect multinode-303478-m02 --format={{.State.Status}}
	I0919 19:18:08.940098  905908 status.go:364] multinode-303478-m02 host status = "Running" (err=<nil>)
	I0919 19:18:08.940127  905908 host.go:66] Checking if "multinode-303478-m02" exists ...
	I0919 19:18:08.940436  905908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-303478-m02
	I0919 19:18:08.961501  905908 host.go:66] Checking if "multinode-303478-m02" exists ...
	I0919 19:18:08.961862  905908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:18:08.961919  905908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-303478-m02
	I0919 19:18:08.982423  905908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33678 SSHKeyPath:/home/jenkins/minikube-integration/19664-732615/.minikube/machines/multinode-303478-m02/id_rsa Username:docker}
	I0919 19:18:09.084699  905908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:18:09.099415  905908 status.go:176] multinode-303478-m02 status: &{Name:multinode-303478-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:18:09.099462  905908 status.go:174] checking status of multinode-303478-m03 ...
	I0919 19:18:09.099860  905908 cli_runner.go:164] Run: docker container inspect multinode-303478-m03 --format={{.State.Status}}
	I0919 19:18:09.118299  905908 status.go:364] multinode-303478-m03 host status = "Stopped" (err=<nil>)
	I0919 19:18:09.118322  905908 status.go:377] host is not running, skipping remaining checks
	I0919 19:18:09.118329  905908 status.go:176] multinode-303478-m03 status: &{Name:multinode-303478-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-303478 node start m03 -v=7 --alsologtostderr: (10.043714443s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.81s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (106.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-303478
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-303478
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-303478: (22.664857958s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303478 --wait=true -v=8 --alsologtostderr
E0919 19:18:43.045528  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303478 --wait=true -v=8 --alsologtostderr: (1m23.254130435s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-303478
--- PASS: TestMultiNode/serial/RestartKeepsNodes (106.09s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-303478 node delete m03: (5.023731521s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)
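The `kubectl get nodes -o go-template` check above renders each node's Ready condition as one " True"/" False" line. The same template can be exercised locally with text/template over mocked node data (the two-node fixture below is illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// readyTmpl is the go-template the test passes to kubectl (minus the
// surrounding shell quoting).
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// renderReady applies the template to decoded node-list data, emitting
// one " <status>" line per node's Ready condition.
func renderReady(nodes map[string]any) (string, error) {
	t, err := template.New("ready").Parse(readyTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, nodes); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Mocked two-node cluster, standing in for real kubectl output.
	node := func(status string) map[string]any {
		return map[string]any{"status": map[string]any{"conditions": []any{
			map[string]any{"type": "Ready", "status": status},
		}}}
	}
	nodes := map[string]any{"items": []any{node("True"), node("True")}}
	out, err := renderReady(nodes)
	fmt.Printf("%q %v\n", out, err)
}
```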

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-303478 stop: (21.366460398s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303478 status: exit status 7 (94.981999ms)

                                                
                                                
-- stdout --
	multinode-303478
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-303478-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr: exit status 7 (87.014836ms)

                                                
                                                
-- stdout --
	multinode-303478
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-303478-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:20:33.291640  919483 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:20:33.291842  919483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:20:33.291871  919483 out.go:358] Setting ErrFile to fd 2...
	I0919 19:20:33.291895  919483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:20:33.292151  919483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-732615/.minikube/bin
	I0919 19:20:33.292385  919483 out.go:352] Setting JSON to false
	I0919 19:20:33.292455  919483 mustload.go:65] Loading cluster: multinode-303478
	I0919 19:20:33.292538  919483 notify.go:220] Checking for updates...
	I0919 19:20:33.292927  919483 config.go:182] Loaded profile config "multinode-303478": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:20:33.292974  919483 status.go:174] checking status of multinode-303478 ...
	I0919 19:20:33.293796  919483 cli_runner.go:164] Run: docker container inspect multinode-303478 --format={{.State.Status}}
	I0919 19:20:33.312094  919483 status.go:364] multinode-303478 host status = "Stopped" (err=<nil>)
	I0919 19:20:33.312117  919483 status.go:377] host is not running, skipping remaining checks
	I0919 19:20:33.312123  919483 status.go:176] multinode-303478 status: &{Name:multinode-303478 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:20:33.312160  919483 status.go:174] checking status of multinode-303478-m02 ...
	I0919 19:20:33.312501  919483 cli_runner.go:164] Run: docker container inspect multinode-303478-m02 --format={{.State.Status}}
	I0919 19:20:33.328518  919483 status.go:364] multinode-303478-m02 host status = "Stopped" (err=<nil>)
	I0919 19:20:33.328542  919483 status.go:377] host is not running, skipping remaining checks
	I0919 19:20:33.328549  919483 status.go:176] multinode-303478-m02 status: &{Name:multinode-303478-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.55s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303478 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303478 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.36025597s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303478 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-303478
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303478-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-303478-m02 --driver=docker  --container-runtime=docker: exit status 14 (92.369315ms)

                                                
                                                
-- stdout --
	* [multinode-303478-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-303478-m02' is duplicated with machine name 'multinode-303478-m02' in profile 'multinode-303478'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303478-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303478-m03 --driver=docker  --container-runtime=docker: (31.975788633s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-303478
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-303478: exit status 80 (352.977569ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-303478 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-303478-m03 already exists in multinode-303478-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-303478-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-303478-m03: (2.072739371s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.55s)

TestPreload (140.04s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-149412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0919 19:22:19.975431  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:22:57.567442  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-149412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.646275639s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-149412 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-149412 image pull gcr.io/k8s-minikube/busybox: (2.039215675s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-149412
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-149412: (10.746809504s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-149412 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-149412 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (22.042354531s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-149412 image list
helpers_test.go:175: Cleaning up "test-preload-149412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-149412
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-149412: (2.275624044s)
--- PASS: TestPreload (140.04s)

TestScheduledStopUnix (106.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-852029 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-852029 --memory=2048 --driver=docker  --container-runtime=docker: (32.954838948s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-852029 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-852029 -n scheduled-stop-852029
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-852029 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 19:25:01.462602  738020 retry.go:31] will retry after 105.274µs: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.463760  738020 retry.go:31] will retry after 218.641µs: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.464910  738020 retry.go:31] will retry after 219.866µs: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.466052  738020 retry.go:31] will retry after 363.25µs: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.467185  738020 retry.go:31] will retry after 318.346µs: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.468336  738020 retry.go:31] will retry after 827.793µs: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.469468  738020 retry.go:31] will retry after 1.058473ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.470596  738020 retry.go:31] will retry after 1.855439ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.472808  738020 retry.go:31] will retry after 3.40373ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.477028  738020 retry.go:31] will retry after 3.657838ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.481219  738020 retry.go:31] will retry after 7.776011ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.489468  738020 retry.go:31] will retry after 4.987242ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.494704  738020 retry.go:31] will retry after 15.38552ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.510974  738020 retry.go:31] will retry after 11.693798ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.523212  738020 retry.go:31] will retry after 16.2122ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
I0919 19:25:01.540455  738020 retry.go:31] will retry after 35.50139ms: open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/scheduled-stop-852029/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-852029 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-852029 -n scheduled-stop-852029
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-852029
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-852029 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-852029
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-852029: exit status 7 (61.846359ms)

-- stdout --
	scheduled-stop-852029
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-852029 -n scheduled-stop-852029
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-852029 -n scheduled-stop-852029: exit status 7 (65.553144ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-852029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-852029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-852029: (1.642354156s)
--- PASS: TestScheduledStopUnix (106.15s)

TestSkaffold (118.3s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4263543973 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-996823 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-996823 --memory=2600 --driver=docker  --container-runtime=docker: (31.471266139s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4263543973 run --minikube-profile skaffold-996823 --kube-context skaffold-996823 --status-check=true --port-forward=false --interactive=false
E0919 19:27:19.975353  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:27:57.568320  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4263543973 run --minikube-profile skaffold-996823 --kube-context skaffold-996823 --status-check=true --port-forward=false --interactive=false: (1m11.06863635s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-c996778cb-v5t59" [918a3789-c451-4f80-9047-dd4b404b2805] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005270402s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-d79476c47-stg9m" [3ba65038-58e1-48ab-ab3e-d42230ae8605] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003580709s
helpers_test.go:175: Cleaning up "skaffold-996823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-996823
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-996823: (2.91784527s)
--- PASS: TestSkaffold (118.30s)

TestInsufficientStorage (11.36s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-791625 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-791625 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.06925855s)

-- stdout --
	{"specversion":"1.0","id":"968807f7-6ef3-4eaa-94c4-9f0c27d735d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-791625] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddee4b8f-756e-4d4b-9f33-4b4b9ef30d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"c2ee5208-4bd3-4d6d-8299-d116dfd614d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b7f0cf2b-21c3-4164-8d40-85210bff885e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig"}}
	{"specversion":"1.0","id":"880768a0-6d3c-4e73-90c2-39f60b2076c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube"}}
	{"specversion":"1.0","id":"0ee0f210-ef5a-46bf-978f-1f70af98cd26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1e031f35-b7b5-4d98-a0e5-97b20952bcb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"059cf848-3e55-4acf-9802-717c18baafce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0482d79f-749d-4d0b-9464-49ee6f4b6082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0aab6c68-df44-4d12-b455-108e4f23f09b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a61b284-93ec-423b-937f-9c290f3d5d0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a9c82247-32ea-4ec3-81e2-6dcbfc825594","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-791625\" primary control-plane node in \"insufficient-storage-791625\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"49828b89-ada2-4e7a-82dc-d6406e9159a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e47e42c-d6d9-4c8a-b7b6-8024e8cf6885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4b158a8-9f87-4355-a2fb-23ca37db5be1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-791625 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-791625 --output=json --layout=cluster: exit status 7 (294.473183ms)

-- stdout --
	{"Name":"insufficient-storage-791625","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-791625","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0919 19:28:21.800331  953633 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-791625" does not appear in /home/jenkins/minikube-integration/19664-732615/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-791625 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-791625 --output=json --layout=cluster: exit status 7 (307.22811ms)

-- stdout --
	{"Name":"insufficient-storage-791625","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-791625","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0919 19:28:22.107778  953694 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-791625" does not appear in /home/jenkins/minikube-integration/19664-732615/kubeconfig
	E0919 19:28:22.118021  953694 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/insufficient-storage-791625/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-791625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-791625
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-791625: (1.686192387s)
--- PASS: TestInsufficientStorage (11.36s)

TestRunningBinaryUpgrade (96.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4128470600 start -p running-upgrade-160943 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4128470600 start -p running-upgrade-160943 --memory=2200 --vm-driver=docker  --container-runtime=docker: (47.940992566s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-160943 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-160943 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.144297287s)
helpers_test.go:175: Cleaning up "running-upgrade-160943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-160943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-160943: (2.277396735s)
--- PASS: TestRunningBinaryUpgrade (96.16s)

TestKubernetesUpgrade (138.53s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0919 19:35:23.047383  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:35:42.366490  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m0.460525068s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-858291
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-858291: (11.084601488s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-858291 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-858291 status --format={{.Host}}: exit status 7 (77.369778ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.022069929s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-858291 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (149.790463ms)

-- stdout --
	* [kubernetes-upgrade-858291] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-858291
	    minikube start -p kubernetes-upgrade-858291 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8582912 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-858291 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0919 19:37:19.974893  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-858291 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.580630848s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-858291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-858291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-858291: (2.890440704s)
--- PASS: TestKubernetesUpgrade (138.53s)

TestMissingContainerUpgrade (121.07s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.109292637 start -p missing-upgrade-626933 --memory=2200 --driver=docker  --container-runtime=docker
E0919 19:34:20.445026  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.109292637 start -p missing-upgrade-626933 --memory=2200 --driver=docker  --container-runtime=docker: (41.550456388s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-626933
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-626933: (10.366772155s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-626933
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-626933 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-626933 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m5.823924866s)
helpers_test.go:175: Cleaning up "missing-upgrade-626933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-626933
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-626933: (2.305117598s)
--- PASS: TestMissingContainerUpgrade (121.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-429083 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-429083 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (92.956921ms)

-- stdout --
	* [NoKubernetes-429083] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-732615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-732615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (44.29s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-429083 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-429083 --driver=docker  --container-runtime=docker: (43.845868405s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-429083 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.29s)

TestNoKubernetes/serial/StartWithStopK8s (18.22s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-429083 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-429083 --no-kubernetes --driver=docker  --container-runtime=docker: (16.164675876s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-429083 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-429083 status -o json: exit status 2 (318.826063ms)
-- stdout --
	{"Name":"NoKubernetes-429083","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-429083
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-429083: (1.73543191s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.22s)

TestNoKubernetes/serial/Start (6.99s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-429083 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-429083 --no-kubernetes --driver=docker  --container-runtime=docker: (6.985697305s)
--- PASS: TestNoKubernetes/serial/Start (6.99s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-429083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-429083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.696571ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (1.14s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

TestNoKubernetes/serial/Stop (1.24s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-429083
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-429083: (1.241789627s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (8.66s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-429083 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-429083 --driver=docker  --container-runtime=docker: (8.658353042s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.66s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-429083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-429083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.168406ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (0.67s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

TestStoppedBinaryUpgrade/Upgrade (133.27s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1748312874 start -p stopped-upgrade-397250 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0919 19:32:19.975879  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:57.568418  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:58.506709  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:58.513771  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:58.525181  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:58.546560  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:58.587953  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:58.669308  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:58.830822  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:59.152402  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:59.794367  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:01.075689  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:03.637908  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:08.759708  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:19.001124  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1748312874 start -p stopped-upgrade-397250 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m33.085401593s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1748312874 -p stopped-upgrade-397250 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1748312874 -p stopped-upgrade-397250 stop: (10.90925079s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-397250 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0919 19:33:39.483167  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-397250 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.275451169s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (133.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-397250
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-397250: (1.405831787s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

TestPause/serial/Start (90.02s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-779846 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-779846 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m30.017827238s)
--- PASS: TestPause/serial/Start (90.02s)

TestNetworkPlugins/group/auto/Start (53.66s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0919 19:37:57.568232  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:37:58.507017  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:38:26.208359  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (53.658279682s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-406747 "pgrep -a kubelet"
I0919 19:38:42.745544  738020 config.go:182] Loaded profile config "auto-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-406747 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rhql4" [d67f4997-912c-4df8-b612-73d708ee9d54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rhql4" [d67f4997-912c-4df8-b612-73d708ee9d54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004472931s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

TestNetworkPlugins/group/auto/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestPause/serial/SecondStartNoReconfiguration (37.61s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-779846 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-779846 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.583308227s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.61s)

TestNetworkPlugins/group/kindnet/Start (76.19s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m16.188414122s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.19s)

TestPause/serial/Pause (0.84s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-779846 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.55s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-779846 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-779846 --output=json --layout=cluster: exit status 2 (547.654323ms)
-- stdout --
	{"Name":"pause-779846","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-779846","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.55s)

TestPause/serial/Unpause (0.82s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-779846 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

TestPause/serial/PauseAgain (1.04s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-779846 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-779846 --alsologtostderr -v=5: (1.043269003s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

TestPause/serial/DeletePaused (2.51s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-779846 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-779846 --alsologtostderr -v=5: (2.505885926s)
--- PASS: TestPause/serial/DeletePaused (2.51s)

TestPause/serial/VerifyDeletedResources (2.97s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.906568943s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-779846
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-779846: exit status 1 (17.028677ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-779846: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.97s)

TestNetworkPlugins/group/calico/Start (78.8s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m18.797810983s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.80s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-b5bcm" [16245d1b-24c7-4584-8ccb-05af5fac93da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003770384s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-406747 "pgrep -a kubelet"
I0919 19:40:39.192686  738020 config.go:182] Loaded profile config "kindnet-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-406747 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-67tk6" [17af4040-e01b-4387-83a1-4ddc5b342f81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-67tk6" [17af4040-e01b-4387-83a1-4ddc5b342f81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005115018s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

TestNetworkPlugins/group/kindnet/DNS (0.39s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.39s)

TestNetworkPlugins/group/kindnet/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

TestNetworkPlugins/group/kindnet/HairPin (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p5jnx" [1be31c7e-1086-430a-bcb3-af932247bde4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004163946s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (62.55s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m2.552258243s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.55s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-406747 "pgrep -a kubelet"
I0919 19:41:19.168596  738020 config.go:182] Loaded profile config "calico-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (14.33s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-406747 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f9cbz" [6bba092a-f7ca-40b6-9bf4-515ad345d29b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f9cbz" [6bba092a-f7ca-40b6-9bf4-515ad345d29b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004348923s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.33s)

TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)
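Localhost and HairPin differ only in the target of the `nc` probe: the former dials loopback inside the pod, while the latter dials the pod's own Service name, which only succeeds when hairpin NAT is working on the node. A sketch of both probes, with names as logged above:

```shell
# Localhost test: probe the pod's own listener via loopback.
kubectl --context calico-406747 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# HairPin test: probe the same listener back through its Service name;
# this round-trips the node's NAT and fails if hairpin mode is off.
kubectl --context calico-406747 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
```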

                                                
                                    
TestNetworkPlugins/group/false/Start (80.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m20.695853009s)
--- PASS: TestNetworkPlugins/group/false/Start (80.70s)
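Each Start step boots a fresh profile with a different networking flag while holding everything else constant. A condensed sketch of the invocation family used throughout this run (binary path and profile name from the logs; swap the `--cni` / `--network-plugin` flag per group):

```shell
# Common shape of the Start invocations in this report; only the CNI
# selection differs between groups (false, flannel, bridge, kubenet, ...).
out/minikube-linux-arm64 start -p false-406747 \
  --memory=3072 --alsologtostderr \
  --wait=true --wait-timeout=15m \
  --cni=false \
  --driver=docker --container-runtime=docker
```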

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-406747 "pgrep -a kubelet"
I0919 19:42:19.869164  738020 config.go:182] Loaded profile config "custom-flannel-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-406747 replace --force -f testdata/netcat-deployment.yaml
E0919 19:42:19.975578  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dbp8q" [870bfc80-785d-47c3-93f4-93a2ccf8aca5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dbp8q" [870bfc80-785d-47c3-93f4-93a2ccf8aca5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005367613s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (74.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0919 19:42:57.568075  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:42:58.507287  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m14.04015482s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.04s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-406747 "pgrep -a kubelet"
I0919 19:43:21.962177  738020 config.go:182] Loaded profile config "false-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-406747 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g2gzw" [94a15adf-c5f1-4fc7-9797-8ebca1ac20dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g2gzw" [94a15adf-c5f1-4fc7-9797-8ebca1ac20dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003933567s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.44s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0919 19:44:03.543138  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/auto-406747/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (56.268238869s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-406747 "pgrep -a kubelet"
I0919 19:44:10.745029  738020 config.go:182] Loaded profile config "enable-default-cni-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-406747 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gtcl6" [752a3c7e-e6af-4184-83c8-80d59e8d434c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gtcl6" [752a3c7e-e6af-4184-83c8-80d59e8d434c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005650712s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (51.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (51.346165252s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9d9gs" [004cf47d-9323-4df5-93aa-c632985c40be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004010343s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
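ControllerPod confirms the flannel DaemonSet itself is healthy before any pod connectivity is exercised. An equivalent manual check (a sketch; the label and namespace are taken from the log above):

```shell
# Wait for the kube-flannel DaemonSet pod(s) to be Ready, as the
# ControllerPod step does with its app=flannel label match.
kubectl --context flannel-406747 -n kube-flannel wait \
  --for=condition=Ready pod -l app=flannel --timeout=10m
```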

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-406747 "pgrep -a kubelet"
I0919 19:44:57.653815  738020 config.go:182] Loaded profile config "flannel-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-406747 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tx46j" [cabc384d-6ca5-4e7d-86c6-75f754eeeb46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tx46j" [cabc384d-6ca5-4e7d-86c6-75f754eeeb46] Running
E0919 19:45:04.986897  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/auto-406747/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004972204s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (61.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0919 19:45:38.005364  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kindnet-406747/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-406747 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m1.733412499s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (61.73s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-406747 "pgrep -a kubelet"
I0919 19:45:39.848069  738020 config.go:182] Loaded profile config "bridge-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-406747 replace --force -f testdata/netcat-deployment.yaml
I0919 19:45:40.325683  738020 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7gckb" [3da43644-efb3-4125-8061-3b9d8be3fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 19:45:43.127409  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kindnet-406747/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7gckb" [3da43644-efb3-4125-8061-3b9d8be3fcc4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005079307s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.52s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (145.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-681635 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0919 19:46:17.944001  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:46:23.066021  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:46:26.908964  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/auto-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:46:33.307517  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-681635 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m25.869895855s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (145.87s)
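FirstStart for the old-k8s-version group pins the cluster to v1.20.0 to exercise behavior on a deliberately old release; the kvm-* flags in the full invocation are presumably inert under the docker driver and are passed uniformly by the harness. A trimmed sketch of the essential flags:

```shell
# Boot a profile on an old Kubernetes release (v1.20.0); profile name
# and flag values taken from the logged invocation above.
out/minikube-linux-arm64 start -p old-k8s-version-681635 \
  --memory=2200 --alsologtostderr --wait=true \
  --driver=docker --container-runtime=docker \
  --kubernetes-version=v1.20.0
```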

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-406747 "pgrep -a kubelet"
I0919 19:46:37.623918  738020 config.go:182] Loaded profile config "kubenet-406747": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-406747 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9zhz2" [0d1d51fc-8255-4787-8fd0-195acbbf7d4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9zhz2" [0d1d51fc-8255-4787-8fd0-195acbbf7d4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004603396s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-406747 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-406747 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (46.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-668127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:47:19.974788  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.211052  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.218240  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.230411  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.253848  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.298267  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.379991  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.541264  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:20.863372  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:21.505143  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:22.786612  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:25.348370  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:30.470076  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:34.751947  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:40.649931  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:40.712344  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:57.568298  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:58.506667  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-668127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (46.647280897s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.65s)

TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-668127 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [96450e07-4639-4c76-b759-b136fc1ac3fe] Pending
E0919 19:48:01.193881  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [96450e07-4639-4c76-b759-b136fc1ac3fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [96450e07-4639-4c76-b759-b136fc1ac3fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013649082s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-668127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-668127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-668127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (11.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-668127 --alsologtostderr -v=3
E0919 19:48:16.732982  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kindnet-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-668127 --alsologtostderr -v=3: (11.03167038s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-668127 -n embed-certs-668127
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-668127 -n embed-certs-668127: exit status 7 (70.69836ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-668127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0919 19:48:22.351495  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:22.359723  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:22.371692  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:22.393031  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:22.434269  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (268.04s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-668127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:48:22.516167  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:22.677656  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:22.999401  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:23.641790  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:24.923370  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:27.485424  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:32.607303  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:42.156025  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-668127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.702977248s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-668127 -n embed-certs-668127
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.04s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-681635 create -f testdata/busybox.yaml
E0919 19:48:42.849580  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1451d906-939d-48e0-adbf-8768ada0efd3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 19:48:43.034238  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/auto-406747/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [1451d906-939d-48e0-adbf-8768ada0efd3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004253955s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-681635 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-681635 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-681635 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.089245943s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-681635 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-681635 --alsologtostderr -v=3
E0919 19:48:56.673604  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:03.331322  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-681635 --alsologtostderr -v=3: (10.943152867s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-681635 -n old-k8s-version-681635
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-681635 -n old-k8s-version-681635: exit status 7 (74.872383ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-681635 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (141.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-681635 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0919 19:49:10.750611  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/auto-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.076605  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.083233  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.094702  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.116035  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.157413  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.238799  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.399998  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:11.721676  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:12.363164  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:13.646024  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:16.208490  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:21.329863  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:21.570402  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:31.572123  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:44.293072  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.297973  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.304590  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.315982  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.337539  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.378966  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.460433  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.622245  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:51.944137  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:52.053543  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:52.585481  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:53.867151  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:49:56.429067  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:01.550448  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:04.078173  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:11.792414  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:32.274383  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:32.812782  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kindnet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:33.015725  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.296061  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.302450  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.313872  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.335250  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.376662  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.458148  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.619715  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.941473  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:41.583302  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:42.864815  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:45.426246  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:50.548082  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:00.574860  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kindnet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:00.790269  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:06.214356  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:12.812556  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:13.236413  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:21.271899  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-681635 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.889766899s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-681635 -n old-k8s-version-681635
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (141.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sw6gq" [cc15af45-7192-4c91-86ec-69dea564e1a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004811071s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sw6gq" [cc15af45-7192-4c91-86ec-69dea564e1a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004379367s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-681635 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0919 19:51:37.866623  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:37.872969  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:37.884364  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:37.905765  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:37.947359  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-681635 image list --format=json
E0919 19:51:38.029637  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:38.191160  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-681635 --alsologtostderr -v=1
E0919 19:51:38.513385  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-681635 -n old-k8s-version-681635
E0919 19:51:39.155464  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-681635 -n old-k8s-version-681635: exit status 2 (355.469527ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-681635 -n old-k8s-version-681635
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-681635 -n old-k8s-version-681635: exit status 2 (334.751846ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-681635 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-681635 -n old-k8s-version-681635
E0919 19:51:40.437625  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:40.515337  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-681635 -n old-k8s-version-681635
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

TestStartStop/group/no-preload/serial/FirstStart (50.09s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-424281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:51:48.125360  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:54.937046  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:58.366938  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:52:02.233763  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:52:03.049010  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:52:18.848231  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:52:19.975150  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:52:20.211441  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-424281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (50.090383812s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.09s)

TestStartStop/group/no-preload/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-424281 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [77be4a1e-be3d-4d71-84a9-d9fdb95b67b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 19:52:35.158354  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [77be4a1e-be3d-4d71-84a9-d9fdb95b67b8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00333925s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-424281 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-424281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-424281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069486604s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-424281 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (10.9s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-424281 --alsologtostderr -v=3
E0919 19:52:47.920042  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-424281 --alsologtostderr -v=3: (10.896093323s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.90s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ksbpl" [67823295-dab0-4da7-9548-962c02a31893] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00422366s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-424281 -n no-preload-424281
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-424281 -n no-preload-424281: exit status 7 (72.42079ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-424281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ksbpl" [67823295-dab0-4da7-9548-962c02a31893] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004356839s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-668127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/SecondStart (269.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-424281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:52:57.567440  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:52:58.506583  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:52:59.809609  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-424281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m29.453713005s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-424281 -n no-preload-424281
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.83s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-668127 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (3.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-668127 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-668127 -n embed-certs-668127
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-668127 -n embed-certs-668127: exit status 2 (410.283482ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-668127 -n embed-certs-668127
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-668127 -n embed-certs-668127: exit status 2 (473.220306ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-668127 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-668127 -n embed-certs-668127
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-668127 -n embed-certs-668127
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.67s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-589737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:53:22.351407  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:24.155148  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:42.945437  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:42.951839  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:42.963271  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:42.984705  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:43.026073  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:43.034620  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/auto-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:43.107950  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:43.269451  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:43.591088  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:44.232868  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:45.514621  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:48.075883  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:50.056633  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:53.198152  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-589737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (54.28142183s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-589737 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d9c0877-3955-464c-a9d5-95ac4c0695a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 19:54:03.439558  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [4d9c0877-3955-464c-a9d5-95ac4c0695a0] Running
E0919 19:54:11.076116  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004040089s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-589737 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-589737 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-589737 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-589737 --alsologtostderr -v=3
E0919 19:54:21.731897  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:23.921975  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-589737 --alsologtostderr -v=3: (11.044580546s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737: exit status 7 (72.077911ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-589737 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-589737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:54:38.778894  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/enable-default-cni-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:51.297752  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:55:04.883942  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:55:19.006220  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:55:32.812494  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kindnet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:55:40.296377  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:56:07.996657  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/bridge-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:56:12.812796  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/calico-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:56:26.806023  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:56:37.866608  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:57:05.573760  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/kubenet-406747/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:57:19.974742  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/functional-273009/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:57:20.211253  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/custom-flannel-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-589737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m29.100478462s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rpbt7" [21338122-8ed0-4f91-8fbc-cc96970f61b2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004388198s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rpbt7" [21338122-8ed0-4f91-8fbc-cc96970f61b2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004211607s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-424281 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-424281 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-424281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-424281 -n no-preload-424281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-424281 -n no-preload-424281: exit status 2 (340.3706ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-424281 -n no-preload-424281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-424281 -n no-preload-424281: exit status 2 (349.575937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-424281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-424281 -n no-preload-424281
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-424281 -n no-preload-424281
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.00s)
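The Pause tests above run `minikube status --format=...` after pausing and log the resulting non-zero exits as "status error: exit status 2 (may be ok)" rather than failing. A minimal sh sketch of that tolerance, assuming (from this log only, not from minikube's documented contract) that 2 and 7 are the informational "paused"/"stopped" codes:

```shell
#!/bin/sh
# Mirror the harness's handling of `minikube status` exit codes as seen in
# this report: 2 (components paused/stopped) and 7 (host stopped) are logged
# as "may be ok", while other non-zero codes are treated as real failures.
# The code-to-meaning mapping here is inferred from this log, not documented.
status_may_be_ok() {
  case "$1" in
    0|2|7) return 0 ;;  # running, paused, or stopped: informational
    *)     return 1 ;;  # anything else: hard error
  esac
}

# Hypothetical usage against a profile (not executed here):
#   out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-424281
#   status_may_be_ok $? || echo "unexpected status error"
```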

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-797500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:57:57.568094  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/addons-810228/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:57:58.507422  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/skaffold-996823/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-797500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (38.525557732s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-797500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0919 19:58:22.351508  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/false-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-797500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071986879s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (5.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-797500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-797500 --alsologtostderr -v=3: (5.744316911s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-797500 -n newest-cni-797500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-797500 -n newest-cni-797500: exit status 7 (77.821895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-797500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (21.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-797500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:58:42.945219  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/old-k8s-version-681635/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:43.034663  738020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/auto-406747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-797500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (20.477348038s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-797500 -n newest-cni-797500
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-797500 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-797500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-797500 -n newest-cni-797500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-797500 -n newest-cni-797500: exit status 2 (368.900649ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-797500 -n newest-cni-797500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-797500 -n newest-cni-797500: exit status 2 (397.790355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-797500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-797500 -n newest-cni-797500
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-797500 -n newest-cni-797500
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vkdj6" [be077cd2-04b5-4a6d-bf4d-5fe612a51f47] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004643129s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vkdj6" [be077cd2-04b5-4a6d-bf4d-5fe612a51f47] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003115756s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-589737 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-589737 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-589737 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737: exit status 2 (323.77197ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737: exit status 2 (342.526159ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-589737 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-589737 -n default-k8s-diff-port-589737
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

                                                
                                    

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-690855 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-690855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-690855
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-406747 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-406747

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-406747

>>> host: /etc/nsswitch.conf:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /etc/hosts:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /etc/resolv.conf:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-406747

>>> host: crictl pods:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: crictl containers:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> k8s: describe netcat deployment:
error: context "cilium-406747" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-406747" does not exist

>>> k8s: netcat logs:
error: context "cilium-406747" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-406747" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-406747" does not exist

>>> k8s: coredns logs:
error: context "cilium-406747" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-406747" does not exist

>>> k8s: api server logs:
error: context "cilium-406747" does not exist

>>> host: /etc/cni:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: ip a s:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: ip r s:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: iptables-save:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: iptables table nat:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-406747

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-406747

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-406747" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-406747" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-406747

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-406747

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-406747" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-406747" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-406747" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-406747" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-406747" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: kubelet daemon config:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> k8s: kubelet logs:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-732615/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:29:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-856992
contexts:
- context:
    cluster: offline-docker-856992
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:29:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-856992
  name: offline-docker-856992
current-context: offline-docker-856992
kind: Config
preferences: {}
users:
- name: offline-docker-856992
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/offline-docker-856992/client.crt
    client-key: /home/jenkins/minikube-integration/19664-732615/.minikube/profiles/offline-docker-856992/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-406747

>>> host: docker daemon status:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: docker daemon config:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: docker system info:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: cri-docker daemon status:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: cri-docker daemon config:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: cri-dockerd version:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: containerd daemon status:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: containerd daemon config:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: containerd config dump:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: crio daemon status:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: crio daemon config:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: /etc/crio:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

>>> host: crio config:
* Profile "cilium-406747" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406747"

----------------------- debugLogs end: cilium-406747 [took: 4.072368714s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-406747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-406747
--- SKIP: TestNetworkPlugins/group/cilium (4.34s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-398161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-398161
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)