Test Report: Docker_Linux_docker_arm64 19598

cb70ad94d69a229bf8d3511a5a00af396fa2386e:2024-09-10:36157

Failed tests (1/343)

|-------|------------------------------|--------------|
| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 75.17        |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (75.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.054369ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qdjcc" [4ac3168f-0bcd-4153-867b-4c58e4383c15] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004242145s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g99fs" [9ffaedc2-7aad-4454-b435-9dc17bafb9aa] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006789781s
addons_test.go:342: (dbg) Run:  kubectl --context addons-018527 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-018527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-018527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.126547687s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-018527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
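The failing step above is the in-cluster probe of the registry Service's cluster DNS name. As a debugging aid, a minimal sketch of re-running the probe and checking the usual suspects by hand; the Service/Endpoints name "registry" is inferred from the probe URL, so treat the exact resource names as assumptions:

    # Re-run the probe exactly as the test does (command taken from the log above)
    kubectl --context addons-018527 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # A timeout usually means no ready endpoints or broken in-cluster DNS
    kubectl --context addons-018527 -n kube-system get svc,endpoints registry
    kubectl --context addons-018527 run --rm dns-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      nslookup registry.kube-system.svc.cluster.local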
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 ip
2024/09/10 17:43:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-018527
helpers_test.go:235: (dbg) docker inspect addons-018527:

-- stdout --
	[
	    {
	        "Id": "405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef",
	        "Created": "2024-09-10T17:30:01.57169032Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8781,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-10T17:30:01.774458637Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a4261f15fdf40db09c0b78a1feabe6bd85433327166d5c98909d23a556dff45f",
	        "ResolvConfPath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/hostname",
	        "HostsPath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/hosts",
	        "LogPath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef-json.log",
	        "Name": "/addons-018527",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-018527:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-018527",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617-init/diff:/var/lib/docker/overlay2/8cfe895502caa769e65b1686e7e1e919ac585a6fa1d0a386b9d76045d1757d52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617/merged",
	                "UpperDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617/diff",
	                "WorkDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-018527",
	                "Source": "/var/lib/docker/volumes/addons-018527/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-018527",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-018527",
	                "name.minikube.sigs.k8s.io": "addons-018527",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2850c2cb8efd269102daae53cea680dc35aa5f039b665837eb72ca69f1fe2223",
	            "SandboxKey": "/var/run/docker/netns/2850c2cb8efd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-018527": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "de2828d51f4c167fea23931843cb56718b83027887c3a3a825b8d99f09967148",
	                    "EndpointID": "4e4507f1bb5b80e9dc772124c3e95db3b0258bfbad1281a8354a01d91ca100c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-018527",
	                        "405e529c548a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
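The Ports map in the inspect output above shows each exposed container port published on an ephemeral 127.0.0.1 port (5000/tcp at 32770, for example). A minimal sketch of extracting such a mapping and probing the registry from the host; the Go-template pattern mirrors the one the harness itself uses for 22/tcp later in this log, and the /v2/ path is the standard Docker registry API root, used here as an assumption:

    # Host port published for the container's 5000/tcp (the registry)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-018527

    # Probe the registry through the published port (32770 in this run)
    curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:32770/v2/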
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-018527 -n addons-018527
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 logs -n 25: (1.483602904s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-933311              | download-only-933311   | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -o=json --download-only              | download-only-643138   | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-643138              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-643138              | download-only-643138   | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-933311              | download-only-933311   | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-643138              | download-only-643138   | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | --download-only -p                   | download-docker-686092 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | download-docker-686092               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-686092            | download-docker-686092 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | --download-only -p                   | binary-mirror-558808   | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | binary-mirror-558808                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38421               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-558808              | binary-mirror-558808   | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| addons  | enable dashboard -p                  | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-018527                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-018527                        |                        |         |         |                     |                     |
	| start   | -p addons-018527 --wait=true         | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:33 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-018527 addons disable         | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:33 UTC | 10 Sep 24 17:34 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-018527 addons                 | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-018527 addons                 | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-018527 addons                 | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | addons-018527                        |                        |         |         |                     |                     |
	| ssh     | addons-018527 ssh curl -s            | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-018527 ip                     | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	| addons  | addons-018527 addons disable         | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-018527 addons disable         | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC |                     |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| ip      | addons-018527 ip                     | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	| addons  | addons-018527 addons disable         | addons-018527          | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
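	For readability, the start invocation that created the addons-018527 profile, which the Audit table above wraps across many rows, reads as follows when joined back into one command (reconstructed from the table; the out/minikube-linux-arm64 binary is assumed from its use elsewhere in this report):

	    out/minikube-linux-arm64 start -p addons-018527 --wait=true \
	      --memory=4000 --alsologtostderr --addons=registry \
	      --addons=metrics-server --addons=volumesnapshots \
	      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	      --driver=docker --container-runtime=docker \
	      --addons=ingress --addons=ingress-dns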
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:36.606140    8286 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:36.606546    8286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:36.606560    8286 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:36.606566    8286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:36.606925    8286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 17:29:36.607607    8286 out.go:352] Setting JSON to false
	I0910 17:29:36.608327    8286 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":724,"bootTime":1725988653,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0910 17:29:36.608397    8286 start.go:139] virtualization:  
	I0910 17:29:36.612515    8286 out.go:177] * [addons-018527] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0910 17:29:36.614669    8286 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:29:36.614727    8286 notify.go:220] Checking for updates...
	I0910 17:29:36.618427    8286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:36.620428    8286 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	I0910 17:29:36.622361    8286 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	I0910 17:29:36.624300    8286 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0910 17:29:36.626166    8286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:29:36.628241    8286 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:36.661206    8286 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 17:29:36.661313    8286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:29:36.725060    8286 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 17:29:36.714989787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:29:36.725167    8286 docker.go:318] overlay module found
	I0910 17:29:36.729046    8286 out.go:177] * Using the docker driver based on user configuration
	I0910 17:29:36.730988    8286 start.go:297] selected driver: docker
	I0910 17:29:36.731009    8286 start.go:901] validating driver "docker" against <nil>
	I0910 17:29:36.731024    8286 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:29:36.731681    8286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:29:36.785829    8286 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 17:29:36.77658729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:29:36.785986    8286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:36.786220    8286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:29:36.788509    8286 out.go:177] * Using Docker driver with root privileges
	I0910 17:29:36.790519    8286 cni.go:84] Creating CNI manager for ""
	I0910 17:29:36.790551    8286 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:29:36.790563    8286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:36.790657    8286 start.go:340] cluster config:
	{Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:36.792861    8286 out.go:177] * Starting "addons-018527" primary control-plane node in "addons-018527" cluster
	I0910 17:29:36.794998    8286 cache.go:121] Beginning downloading kic base image for docker with docker
	I0910 17:29:36.797164    8286 out.go:177] * Pulling base image v0.0.45-1725963390-19606 ...
	I0910 17:29:36.799222    8286 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:29:36.799259    8286 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local docker daemon
	I0910 17:29:36.799281    8286 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 17:29:36.799298    8286 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:36.799380    8286 preload.go:172] Found /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 17:29:36.799390    8286 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 17:29:36.799724    8286 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/config.json ...
	I0910 17:29:36.799749    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/config.json: {Name:mk124bf20b951e096c327decf76be8ea8a9c9f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:36.815692    8286 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 17:29:36.815865    8286 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory
	I0910 17:29:36.815888    8286 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory, skipping pull
	I0910 17:29:36.815899    8286 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 exists in cache, skipping pull
	I0910 17:29:36.815907    8286 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 as a tarball
	I0910 17:29:36.815913    8286 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 from local cache
	I0910 17:29:54.607314    8286 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 from cached tarball
	I0910 17:29:54.607351    8286 cache.go:194] Successfully downloaded all kic artifacts
	I0910 17:29:54.607379    8286 start.go:360] acquireMachinesLock for addons-018527: {Name:mkd0ce81edb47e790f272bf643f50e7d96e61889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:54.607488    8286 start.go:364] duration metric: took 88.41µs to acquireMachinesLock for "addons-018527"
	I0910 17:29:54.607513    8286 start.go:93] Provisioning new machine with config: &{Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:29:54.607600    8286 start.go:125] createHost starting for "" (driver="docker")
	I0910 17:29:54.610214    8286 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0910 17:29:54.610594    8286 start.go:159] libmachine.API.Create for "addons-018527" (driver="docker")
	I0910 17:29:54.610631    8286 client.go:168] LocalClient.Create starting
	I0910 17:29:54.610737    8286 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem
	I0910 17:29:54.844752    8286 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem
	I0910 17:29:55.307671    8286 cli_runner.go:164] Run: docker network inspect addons-018527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0910 17:29:55.321747    8286 cli_runner.go:211] docker network inspect addons-018527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0910 17:29:55.321835    8286 network_create.go:284] running [docker network inspect addons-018527] to gather additional debugging logs...
	I0910 17:29:55.321856    8286 cli_runner.go:164] Run: docker network inspect addons-018527
	W0910 17:29:55.337944    8286 cli_runner.go:211] docker network inspect addons-018527 returned with exit code 1
	I0910 17:29:55.337976    8286 network_create.go:287] error running [docker network inspect addons-018527]: docker network inspect addons-018527: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-018527 not found
	I0910 17:29:55.337988    8286 network_create.go:289] output of [docker network inspect addons-018527]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-018527 not found
	
	** /stderr **
	I0910 17:29:55.338076    8286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0910 17:29:55.355766    8286 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017691c0}
	I0910 17:29:55.355805    8286 network_create.go:124] attempt to create docker network addons-018527 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0910 17:29:55.355863    8286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-018527 addons-018527
	I0910 17:29:55.427742    8286 network_create.go:108] docker network addons-018527 192.168.49.0/24 created
	I0910 17:29:55.427770    8286 kic.go:121] calculated static IP "192.168.49.2" for the "addons-018527" container
	I0910 17:29:55.427837    8286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0910 17:29:55.443851    8286 cli_runner.go:164] Run: docker volume create addons-018527 --label name.minikube.sigs.k8s.io=addons-018527 --label created_by.minikube.sigs.k8s.io=true
	I0910 17:29:55.461983    8286 oci.go:103] Successfully created a docker volume addons-018527
	I0910 17:29:55.462081    8286 cli_runner.go:164] Run: docker run --rm --name addons-018527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-018527 --entrypoint /usr/bin/test -v addons-018527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -d /var/lib
	I0910 17:29:57.661341    8286 cli_runner.go:217] Completed: docker run --rm --name addons-018527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-018527 --entrypoint /usr/bin/test -v addons-018527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -d /var/lib: (2.199207811s)
	I0910 17:29:57.661369    8286 oci.go:107] Successfully prepared a docker volume addons-018527
	I0910 17:29:57.661399    8286 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:29:57.661417    8286 kic.go:194] Starting extracting preloaded images to volume ...
	I0910 17:29:57.661500    8286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-018527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0910 17:30:01.497417    8286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-018527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -I lz4 -xf /preloaded.tar -C /extractDir: (3.835879022s)
	I0910 17:30:01.497452    8286 kic.go:203] duration metric: took 3.83603144s to extract preloaded images to volume ...
	W0910 17:30:01.497647    8286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0910 17:30:01.497795    8286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0910 17:30:01.554442    8286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-018527 --name addons-018527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-018527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-018527 --network addons-018527 --ip 192.168.49.2 --volume addons-018527:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9
	I0910 17:30:01.946009    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Running}}
	I0910 17:30:01.970444    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:01.997379    8286 cli_runner.go:164] Run: docker exec addons-018527 stat /var/lib/dpkg/alternatives/iptables
	I0910 17:30:02.097701    8286 oci.go:144] the created container "addons-018527" has a running status.
	I0910 17:30:02.097738    8286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa...
	I0910 17:30:02.455924    8286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0910 17:30:02.486696    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:02.504718    8286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0910 17:30:02.504737    8286 kic_runner.go:114] Args: [docker exec --privileged addons-018527 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0910 17:30:02.557819    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:02.583882    8286 machine.go:93] provisionDockerMachine start ...
	I0910 17:30:02.583971    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:02.606353    8286 main.go:141] libmachine: Using SSH client type: native
	I0910 17:30:02.606634    8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0910 17:30:02.606648    8286 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 17:30:02.758072    8286 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-018527
	
	I0910 17:30:02.758137    8286 ubuntu.go:169] provisioning hostname "addons-018527"
	I0910 17:30:02.758236    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:02.777322    8286 main.go:141] libmachine: Using SSH client type: native
	I0910 17:30:02.777572    8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0910 17:30:02.777588    8286 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-018527 && echo "addons-018527" | sudo tee /etc/hostname
	I0910 17:30:02.938054    8286 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-018527
	
	I0910 17:30:02.938173    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:02.960580    8286 main.go:141] libmachine: Using SSH client type: native
	I0910 17:30:02.960845    8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0910 17:30:02.960873    8286 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-018527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-018527/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-018527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:30:03.123157    8286 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:30:03.123236    8286 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19598-2209/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-2209/.minikube}
	I0910 17:30:03.123274    8286 ubuntu.go:177] setting up certificates
	I0910 17:30:03.123314    8286 provision.go:84] configureAuth start
	I0910 17:30:03.123438    8286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-018527
	I0910 17:30:03.144259    8286 provision.go:143] copyHostCerts
	I0910 17:30:03.144350    8286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-2209/.minikube/ca.pem (1082 bytes)
	I0910 17:30:03.144482    8286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-2209/.minikube/cert.pem (1123 bytes)
	I0910 17:30:03.144545    8286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-2209/.minikube/key.pem (1679 bytes)
	I0910 17:30:03.144595    8286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-2209/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca-key.pem org=jenkins.addons-018527 san=[127.0.0.1 192.168.49.2 addons-018527 localhost minikube]
	I0910 17:30:03.767147    8286 provision.go:177] copyRemoteCerts
	I0910 17:30:03.767218    8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:30:03.767295    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:03.788041    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:03.883281    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:30:03.907885    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:30:03.933405    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 17:30:03.957786    8286 provision.go:87] duration metric: took 834.441741ms to configureAuth
	I0910 17:30:03.957816    8286 ubuntu.go:193] setting minikube options for container-runtime
	I0910 17:30:03.958061    8286 config.go:182] Loaded profile config "addons-018527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:30:03.958134    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:03.978858    8286 main.go:141] libmachine: Using SSH client type: native
	I0910 17:30:03.979105    8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0910 17:30:03.979120    8286 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 17:30:04.130984    8286 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0910 17:30:04.131004    8286 ubuntu.go:71] root file system type: overlay
	I0910 17:30:04.131165    8286 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 17:30:04.131240    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:04.150771    8286 main.go:141] libmachine: Using SSH client type: native
	I0910 17:30:04.151124    8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0910 17:30:04.151224    8286 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 17:30:04.290915    8286 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 17:30:04.291019    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:04.310270    8286 main.go:141] libmachine: Using SSH client type: native
	I0910 17:30:04.310557    8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0910 17:30:04.310579    8286 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 17:30:05.175971    8286 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-10 17:30:04.284752176 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0910 17:30:05.176005    8286 machine.go:96] duration metric: took 2.592104047s to provisionDockerMachine
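
The comments in the unit file written above describe systemd's list-reset rule for ExecStart=: an empty assignment clears whatever was inherited, so exactly one start command remains. A rough stdlib-only Go check of that invariant over a unit file's text (a hypothetical helper, not part of minikube):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/lib/systemd/system/docker.service")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var cmds []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "ExecStart=") {
			continue
		}
		val := strings.TrimPrefix(line, "ExecStart=")
		if val == "" {
			cmds = nil // empty assignment resets the list, per systemd.service(5)
		} else {
			cmds = append(cmds, val)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	// Anything other than one here would trip the "more than one ExecStart="
	// error quoted in the unit's comments (Type=notify is not oneshot).
	fmt.Printf("effective ExecStart count: %d\n", len(cmds))
}
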
	I0910 17:30:05.176018    8286 client.go:171] duration metric: took 10.565379685s to LocalClient.Create
	I0910 17:30:05.176053    8286 start.go:167] duration metric: took 10.565460424s to libmachine.API.Create "addons-018527"
	I0910 17:30:05.176068    8286 start.go:293] postStartSetup for "addons-018527" (driver="docker")
	I0910 17:30:05.176080    8286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:30:05.176153    8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:30:05.176200    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:05.195275    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:05.287709    8286 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:30:05.290818    8286 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0910 17:30:05.290855    8286 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0910 17:30:05.290867    8286 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0910 17:30:05.290873    8286 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0910 17:30:05.290883    8286 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-2209/.minikube/addons for local assets ...
	I0910 17:30:05.290950    8286 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-2209/.minikube/files for local assets ...
	I0910 17:30:05.290978    8286 start.go:296] duration metric: took 114.904348ms for postStartSetup
	I0910 17:30:05.291278    8286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-018527
	I0910 17:30:05.308350    8286 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/config.json ...
	I0910 17:30:05.308644    8286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:30:05.308694    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:05.325782    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:05.415635    8286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0910 17:30:05.420357    8286 start.go:128] duration metric: took 10.812741817s to createHost
	I0910 17:30:05.420379    8286 start.go:83] releasing machines lock for "addons-018527", held for 10.812882633s
	I0910 17:30:05.420464    8286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-018527
	I0910 17:30:05.438439    8286 ssh_runner.go:195] Run: cat /version.json
	I0910 17:30:05.438492    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:05.438536    8286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:30:05.438596    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:05.457736    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:05.459134    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:05.676178    8286 ssh_runner.go:195] Run: systemctl --version
	I0910 17:30:05.680631    8286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 17:30:05.685645    8286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0910 17:30:05.713336    8286 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0910 17:30:05.713448    8286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:30:05.744284    8286 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
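
The find/-exec mv pair above sidelines competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix. The same rename pass, sketched in Go (directory and name patterns taken from the log; an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Match the find command's patterns: *bridge* or *podman*,
		// skipping files that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", src)
	}
}
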
	I0910 17:30:05.744359    8286 start.go:495] detecting cgroup driver to use...
	I0910 17:30:05.744407    8286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0910 17:30:05.744543    8286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:30:05.762565    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 17:30:05.773067    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 17:30:05.784081    8286 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 17:30:05.784200    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 17:30:05.794413    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:30:05.804583    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 17:30:05.815047    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:30:05.825264    8286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:30:05.835358    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 17:30:05.845249    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 17:30:05.855798    8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 17:30:05.866094    8286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:30:05.875163    8286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:30:05.884069    8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:05.974596    8286 ssh_runner.go:195] Run: sudo systemctl restart containerd
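
The run of sed commands above edits /etc/containerd/config.toml in place to force the cgroupfs driver before restarting containerd. As a sketch, the SystemdCgroup edit alone could be done in Go like this (the regexp mirrors the sed expression from the log):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
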
	I0910 17:30:06.109413    8286 start.go:495] detecting cgroup driver to use...
	I0910 17:30:06.109502    8286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0910 17:30:06.109577    8286 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 17:30:06.128194    8286 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0910 17:30:06.128342    8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 17:30:06.148130    8286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:30:06.167756    8286 ssh_runner.go:195] Run: which cri-dockerd
	I0910 17:30:06.172165    8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 17:30:06.182673    8286 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 17:30:06.207372    8286 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 17:30:06.317689    8286 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 17:30:06.423925    8286 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 17:30:06.424096    8286 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 17:30:06.446374    8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:06.549184    8286 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 17:30:06.825524    8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 17:30:06.838451    8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:30:06.850755    8286 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 17:30:06.944051    8286 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 17:30:07.039558    8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:07.148087    8286 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 17:30:07.163173    8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:30:07.174179    8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:07.265255    8286 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 17:30:07.334640    8286 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 17:30:07.334785    8286 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 17:30:07.339124    8286 start.go:563] Will wait 60s for crictl version
	I0910 17:30:07.339239    8286 ssh_runner.go:195] Run: which crictl
	I0910 17:30:07.344173    8286 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:30:07.381511    8286 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0910 17:30:07.381653    8286 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 17:30:07.403348    8286 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 17:30:07.428976    8286 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.1 ...
	I0910 17:30:07.429107    8286 cli_runner.go:164] Run: docker network inspect addons-018527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0910 17:30:07.447428    8286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0910 17:30:07.451273    8286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
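
The bash one-liner above refreshes the host.minikube.internal entry by filtering out any stale line, appending the new mapping, and copying a temp file over /etc/hosts. A Go sketch of the same filter-and-rewrite pattern (IP and hostname taken from the log; a rename performs the final swap):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hosts = "/etc/hosts"
	const entry = "192.168.49.1\thost.minikube.internal"

	data, err := os.ReadFile(hosts)
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop any existing mapping for the same name, matching the grep -v.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	tmp := hosts + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, hosts); err != nil {
		log.Fatal(err)
	}
}
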
	I0910 17:30:07.462516    8286 kubeadm.go:883] updating cluster {Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:30:07.462633    8286 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:30:07.462695    8286 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 17:30:07.483779    8286 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 17:30:07.483813    8286 docker.go:615] Images already preloaded, skipping extraction
	I0910 17:30:07.483893    8286 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 17:30:07.505133    8286 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 17:30:07.505157    8286 cache_images.go:84] Images are preloaded, skipping loading
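
"Images are preloaded" is decided by comparing the docker images output against the expected preload list for this Kubernetes version. A toy version of that membership check (image names inlined from the stdout block above; not minikube's actual code):

package main

import "fmt"

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	// In the real flow this set comes from `docker images --format {{.Repository}}:{{.Tag}}`.
	present := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.31.0": true,
		"registry.k8s.io/etcd:3.5.15-0":          true,
		"registry.k8s.io/pause:3.10":             true,
	}
	var missing []string
	for _, img := range required {
		if !present[img] {
			missing = append(missing, img)
		}
	}
	if len(missing) == 0 {
		fmt.Println("images already preloaded, skipping extraction")
	} else {
		fmt.Println("need to load:", missing)
	}
}
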
	I0910 17:30:07.505177    8286 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0910 17:30:07.505275    8286 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-018527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:30:07.505357    8286 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 17:30:07.555821    8286 cni.go:84] Creating CNI manager for ""
	I0910 17:30:07.555855    8286 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:30:07.555870    8286 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:30:07.555892    8286 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-018527 NodeName:addons-018527 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:30:07.556038    8286 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-018527"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
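
The generated kubeadm config above is four YAML documents in one file, separated by --- markers. A quick stdlib-only scan that reports each document's apiVersion and kind (no YAML library, just line matching; the path is an assumption based on the kubeadm.yaml destination used later in the log):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	doc, apiVersion, kind := 1, "", ""
	flush := func() {
		fmt.Printf("doc %d: %s / %s\n", doc, apiVersion, kind)
		doc++
		apiVersion, kind = "", ""
	}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.HasPrefix(line, "---"):
			flush() // document boundary: report the one just finished
		case strings.HasPrefix(line, "apiVersion:"):
			apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
		case strings.HasPrefix(line, "kind:"):
			kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	flush() // the last document has no trailing ---
}
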
	
	I0910 17:30:07.556108    8286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:30:07.565774    8286 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:30:07.565854    8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 17:30:07.574753    8286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0910 17:30:07.594236    8286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:30:07.613348    8286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0910 17:30:07.632958    8286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0910 17:30:07.636561    8286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:30:07.647949    8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:07.742247    8286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:30:07.758325    8286 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527 for IP: 192.168.49.2
	I0910 17:30:07.758397    8286 certs.go:194] generating shared ca certs ...
	I0910 17:30:07.758413    8286 certs.go:226] acquiring lock for ca certs: {Name:mk064211dcef1159c3fefad646daeaa676bc22b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:07.758528    8286 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key
	I0910 17:30:08.271435    8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt ...
	I0910 17:30:08.271488    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt: {Name:mk525f91ee991e7af186c1aa3251b98eaa768bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.271702    8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key ...
	I0910 17:30:08.271717    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key: {Name:mk359588b98040abe8d71cb1dff488dcd56fc6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.271829    8286 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key
	I0910 17:30:08.463432    8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.crt ...
	I0910 17:30:08.463460    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.crt: {Name:mk9ac8f7ff34ab23843e3e0a509474eaad42eace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.463632    8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key ...
	I0910 17:30:08.463645    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key: {Name:mkd28532052f6a8c196373722115cad6e3e4473d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.463726    8286 certs.go:256] generating profile certs ...
	I0910 17:30:08.463786    8286 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.key
	I0910 17:30:08.463806    8286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt with IP's: []
	I0910 17:30:08.720829    8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt ...
	I0910 17:30:08.720865    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: {Name:mk57a965600b99b73f1d5b2cb45135fcd8e23e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.721094    8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.key ...
	I0910 17:30:08.721107    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.key: {Name:mk2e6f1f97b44a486bba64c702c9a6809c6a0657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.721203    8286 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372
	I0910 17:30:08.721220    8286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0910 17:30:09.070554    8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372 ...
	I0910 17:30:09.070586    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372: {Name:mkb032b2cda693551797449aa0f56c82cb539253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.070870    8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372 ...
	I0910 17:30:09.070889    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372: {Name:mk6e82363d379138605c66d45a05727ea1246f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.071025    8286 certs.go:381] copying /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372 -> /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt
	I0910 17:30:09.071135    8286 certs.go:385] copying /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372 -> /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key
	I0910 17:30:09.071211    8286 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key
	I0910 17:30:09.071232    8286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt with IP's: []
	I0910 17:30:09.980793    8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt ...
	I0910 17:30:09.980827    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt: {Name:mkff6197ab2e7bf1e631f06a88f092437d386b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.981003    8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key ...
	I0910 17:30:09.981017    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key: {Name:mk814ac16fbd11f1d16abc8fd73241fd8297b6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.981211    8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 17:30:09.981252    8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:30:09.981278    8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:30:09.981306    8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/key.pem (1679 bytes)
	I0910 17:30:09.981888    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:30:10.045347    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:30:10.105060    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:30:10.139510    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 17:30:10.168686    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 17:30:10.194203    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 17:30:10.219741    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:30:10.247357    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 17:30:10.272068    8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:30:10.297357    8286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:30:10.316193    8286 ssh_runner.go:195] Run: openssl version
	I0910 17:30:10.321799    8286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:30:10.331866    8286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:30:10.335965    8286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:30:10.336107    8286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:30:10.343311    8286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:30:10.354041    8286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:30:10.357832    8286 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:30:10.357890    8286 kubeadm.go:392] StartCluster: {Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:30:10.358022    8286 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 17:30:10.375222    8286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:30:10.384733    8286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:30:10.394296    8286 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0910 17:30:10.394480    8286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:30:10.404281    8286 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:30:10.404302    8286 kubeadm.go:157] found existing configuration files:
	
	I0910 17:30:10.404385    8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:30:10.413440    8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:30:10.413555    8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:30:10.422367    8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:30:10.431616    8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:30:10.431728    8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:30:10.440313    8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:30:10.450109    8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:30:10.450226    8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:30:10.459892    8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:30:10.470046    8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:30:10.470119    8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
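
Each grep/rm pair above applies the same rule: keep a kubeconfig only if it already references https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm can regenerate it. The loop, sketched in Go (paths from the log; a read error is treated like a failed grep):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already points at the expected endpoint
		}
		// Missing or stale: remove it, mirroring `sudo rm -f`.
		os.Remove(f)
		fmt.Println("removed stale config:", f)
	}
}
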
	I0910 17:30:10.479420    8286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0910 17:30:10.521839    8286 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:30:10.521937    8286 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:30:10.551722    8286 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0910 17:30:10.551804    8286 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0910 17:30:10.551844    8286 kubeadm.go:310] OS: Linux
	I0910 17:30:10.551898    8286 kubeadm.go:310] CGROUPS_CPU: enabled
	I0910 17:30:10.551963    8286 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0910 17:30:10.552024    8286 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0910 17:30:10.552090    8286 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0910 17:30:10.552141    8286 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0910 17:30:10.552221    8286 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0910 17:30:10.552278    8286 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0910 17:30:10.552345    8286 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0910 17:30:10.552405    8286 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0910 17:30:10.618404    8286 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:30:10.618557    8286 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:30:10.618677    8286 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:30:10.633265    8286 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:30:10.638497    8286 out.go:235]   - Generating certificates and keys ...
	I0910 17:30:10.638734    8286 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:30:10.638847    8286 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:30:10.797110    8286 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:30:11.454946    8286 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:30:11.768421    8286 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:30:12.163676    8286 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:30:12.588391    8286 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:30:12.588696    8286 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-018527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0910 17:30:13.155633    8286 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:30:13.155968    8286 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-018527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0910 17:30:13.872257    8286 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:30:14.348119    8286 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:30:14.634213    8286 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:30:14.634603    8286 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:30:14.951222    8286 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:30:15.170258    8286 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:30:15.792783    8286 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:30:16.167077    8286 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:30:17.458837    8286 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:30:17.459658    8286 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:30:17.463833    8286 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:30:17.466533    8286 out.go:235]   - Booting up control plane ...
	I0910 17:30:17.466650    8286 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:30:17.467107    8286 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:30:17.468386    8286 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:30:17.486095    8286 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:30:17.492293    8286 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:30:17.492353    8286 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:30:17.604729    8286 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:30:17.604861    8286 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:30:19.105938    8286 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501562351s
	I0910 17:30:19.106031    8286 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:30:25.108247    8286 kubeadm.go:310] [api-check] The API server is healthy after 6.002261459s
	I0910 17:30:25.128734    8286 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:30:25.144590    8286 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:30:25.183141    8286 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:30:25.183334    8286 kubeadm.go:310] [mark-control-plane] Marking the node addons-018527 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:30:25.196383    8286 kubeadm.go:310] [bootstrap-token] Using token: ni4uj8.svsil8e4x0j42lib
	I0910 17:30:25.198223    8286 out.go:235]   - Configuring RBAC rules ...
	I0910 17:30:25.198386    8286 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:30:25.205277    8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:30:25.216568    8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:30:25.224174    8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:30:25.228949    8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:30:25.233296    8286 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:30:25.517177    8286 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:30:25.942944    8286 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:30:26.515999    8286 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:30:26.517302    8286 kubeadm.go:310] 
	I0910 17:30:26.517375    8286 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:30:26.517388    8286 kubeadm.go:310] 
	I0910 17:30:26.517468    8286 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:30:26.517477    8286 kubeadm.go:310] 
	I0910 17:30:26.517503    8286 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:30:26.517563    8286 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:30:26.517616    8286 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:30:26.517625    8286 kubeadm.go:310] 
	I0910 17:30:26.517677    8286 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:30:26.517685    8286 kubeadm.go:310] 
	I0910 17:30:26.517731    8286 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:30:26.517739    8286 kubeadm.go:310] 
	I0910 17:30:26.517790    8286 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:30:26.517865    8286 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:30:26.517935    8286 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:30:26.517943    8286 kubeadm.go:310] 
	I0910 17:30:26.518024    8286 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:30:26.518102    8286 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:30:26.518111    8286 kubeadm.go:310] 
	I0910 17:30:26.518192    8286 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ni4uj8.svsil8e4x0j42lib \
	I0910 17:30:26.518294    8286 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2949b2a2dda6376e1bb92d867ada754fab30b7a6343fd8388bdd9e6344c68eb2 \
	I0910 17:30:26.518318    8286 kubeadm.go:310] 	--control-plane 
	I0910 17:30:26.518322    8286 kubeadm.go:310] 
	I0910 17:30:26.518436    8286 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:30:26.518446    8286 kubeadm.go:310] 
	I0910 17:30:26.518525    8286 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ni4uj8.svsil8e4x0j42lib \
	I0910 17:30:26.518626    8286 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2949b2a2dda6376e1bb92d867ada754fab30b7a6343fd8388bdd9e6344c68eb2 
	I0910 17:30:26.521134    8286 kubeadm.go:310] W0910 17:30:10.518238    1809 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:26.521495    8286 kubeadm.go:310] W0910 17:30:10.519161    1809 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:26.521743    8286 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0910 17:30:26.521882    8286 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
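The two W-lines above flag the deprecated kubeadm.k8s.io/v1beta3 spec. A minimal sketch of the migration the warning itself suggests, assuming the old ClusterConfiguration is still stored in the kubeadm-config ConfigMap (old.yaml/new.yaml are placeholder file names):

	# Dump the in-cluster config, then rewrite it against the current API version.
	kubectl -n kube-system get configmap kubeadm-config \
	  -o jsonpath='{.data.ClusterConfiguration}' > old.yaml
	kubeadm config migrate --old-config old.yaml --new-config new.yaml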
	I0910 17:30:26.521910    8286 cni.go:84] Creating CNI manager for ""
	I0910 17:30:26.521925    8286 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:30:26.525757    8286 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 17:30:26.528094    8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 17:30:26.537813    8286 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
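To see the bridge CNI config minikube just copied onto the node, one could cat it from inside the container; a sketch assuming the addons-018527 profile is still running:

	minikube -p addons-018527 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"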
	I0910 17:30:26.558065    8286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:30:26.558187    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:26.558273    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-018527 minikube.k8s.io/updated_at=2024_09_10T17_30_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-018527 minikube.k8s.io/primary=true
	I0910 17:30:26.705998    8286 ops.go:34] apiserver oom_adj: -16
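The oom_adj of -16 reported above means the kernel is strongly discouraged from OOM-killing the apiserver. A sketch of re-checking it by hand (single quotes keep $(pgrep ...) from expanding on the host):

	minikube -p addons-018527 ssh 'cat /proc/$(pgrep kube-apiserver)/oom_adj'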
	I0910 17:30:26.706102    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:27.206734    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:27.706398    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:28.206269    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:28.706633    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:29.206805    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:29.707183    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:30.207304    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:30.706382    8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:30.808584    8286 kubeadm.go:1113] duration metric: took 4.250436701s to wait for elevateKubeSystemPrivileges
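The repeated "get sa default" runs above are a readiness poll: the "default" ServiceAccount only appears once the controller-manager has finished bootstrapping, so minikube retries roughly twice a second until it exists. A minimal shell equivalent of that loop (an assumed form for illustration, not minikube's actual code):

	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done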
	I0910 17:30:30.808611    8286 kubeadm.go:394] duration metric: took 20.45072418s to StartCluster
	I0910 17:30:30.808627    8286 settings.go:142] acquiring lock: {Name:mk08d9d8b25bc27f9f84ae0f54ae1e531fa50eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:30.808734    8286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-2209/kubeconfig
	I0910 17:30:30.809142    8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/kubeconfig: {Name:mk6dfa0cdc9dcc6fca3c984f41ed79b7f8cca436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:30.809320    8286 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:30:30.809411    8286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:30:30.809666    8286 config.go:182] Loaded profile config "addons-018527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:30:30.809697    8286 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 17:30:30.809778    8286 addons.go:69] Setting yakd=true in profile "addons-018527"
	I0910 17:30:30.809802    8286 addons.go:234] Setting addon yakd=true in "addons-018527"
	I0910 17:30:30.809826    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.810298    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.810467    8286 addons.go:69] Setting inspektor-gadget=true in profile "addons-018527"
	I0910 17:30:30.810495    8286 addons.go:234] Setting addon inspektor-gadget=true in "addons-018527"
	I0910 17:30:30.810517    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.810885    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.811390    8286 addons.go:69] Setting cloud-spanner=true in profile "addons-018527"
	I0910 17:30:30.811425    8286 addons.go:234] Setting addon cloud-spanner=true in "addons-018527"
	I0910 17:30:30.811449    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.811830    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.815129    8286 addons.go:69] Setting metrics-server=true in profile "addons-018527"
	I0910 17:30:30.815180    8286 addons.go:69] Setting gcp-auth=true in profile "addons-018527"
	I0910 17:30:30.815226    8286 addons.go:234] Setting addon metrics-server=true in "addons-018527"
	I0910 17:30:30.815276    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.815723    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.816337    8286 addons.go:69] Setting volcano=true in profile "addons-018527"
	I0910 17:30:30.816382    8286 addons.go:234] Setting addon volcano=true in "addons-018527"
	I0910 17:30:30.816412    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.816842    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.820736    8286 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-018527"
	I0910 17:30:30.820789    8286 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-018527"
	I0910 17:30:30.820825    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.821260    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.815276    8286 out.go:177] * Verifying Kubernetes components...
	I0910 17:30:30.815174    8286 addons.go:69] Setting default-storageclass=true in profile "addons-018527"
	I0910 17:30:30.838481    8286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-018527"
	I0910 17:30:30.838815    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.840228    8286 addons.go:69] Setting registry=true in profile "addons-018527"
	I0910 17:30:30.840275    8286 addons.go:234] Setting addon registry=true in "addons-018527"
	I0910 17:30:30.840313    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.843898    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.815236    8286 addons.go:69] Setting ingress=true in profile "addons-018527"
	I0910 17:30:30.854430    8286 addons.go:234] Setting addon ingress=true in "addons-018527"
	I0910 17:30:30.854479    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.855106    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.857295    8286 addons.go:69] Setting storage-provisioner=true in profile "addons-018527"
	I0910 17:30:30.857354    8286 addons.go:234] Setting addon storage-provisioner=true in "addons-018527"
	I0910 17:30:30.857389    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.857975    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.815241    8286 addons.go:69] Setting ingress-dns=true in profile "addons-018527"
	I0910 17:30:30.871956    8286 addons.go:234] Setting addon ingress-dns=true in "addons-018527"
	I0910 17:30:30.872007    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.872048    8286 addons.go:69] Setting volumesnapshots=true in profile "addons-018527"
	I0910 17:30:30.872080    8286 addons.go:234] Setting addon volumesnapshots=true in "addons-018527"
	I0910 17:30:30.872097    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.872529    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.884537    8286 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-018527"
	I0910 17:30:30.884578    8286 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-018527"
	I0910 17:30:30.884904    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.815166    8286 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-018527"
	I0910 17:30:30.901093    8286 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-018527"
	I0910 17:30:30.901133    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:30.901631    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.815231    8286 mustload.go:65] Loading cluster: addons-018527
	I0910 17:30:30.930606    8286 config.go:182] Loaded profile config "addons-018527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:30:30.930883    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.979718    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:30.986921    8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:31.056656    8286 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 17:30:31.069429    8286 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 17:30:31.069555    8286 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:30:31.069567    8286 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 17:30:31.069655    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.072991    8286 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:30:31.073021    8286 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 17:30:31.073114    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.080495    8286 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 17:30:31.080655    8286 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0910 17:30:31.082470    8286 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 17:30:31.090912    8286 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 17:30:31.091087    8286 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:30:31.091125    8286 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 17:30:31.091187    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.095439    8286 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:31.095480    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 17:30:31.095861    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.099219    8286 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 17:30:31.107130    8286 addons.go:234] Setting addon default-storageclass=true in "addons-018527"
	I0910 17:30:31.107181    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:31.107608    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:31.126305    8286 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:31.126358    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 17:30:31.126424    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.126734    8286 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0910 17:30:31.128864    8286 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0910 17:30:31.135101    8286 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:30:31.135154    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0910 17:30:31.135229    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.153172    8286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:30:31.153298    8286 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 17:30:31.155191    8286 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:31.155215    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:30:31.155289    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.160480    8286 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:30:31.160523    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 17:30:31.160615    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.170426    8286 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-018527"
	I0910 17:30:31.170469    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:31.170876    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:31.177442    8286 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 17:30:31.208610    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 17:30:31.210461    8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:30:31.210486    8286 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 17:30:31.210562    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.214986    8286 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:31.229353    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 17:30:31.246526    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 17:30:31.249633    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:31.251383    8286 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:31.278668    8286 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:31.278732    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 17:30:31.278809    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.278977    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 17:30:31.281140    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 17:30:31.284583    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 17:30:31.290816    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 17:30:31.315296    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 17:30:31.325310    8286 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 17:30:31.328097    8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:30:31.328209    8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 17:30:31.328331    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.345213    8286 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 17:30:31.350055    8286 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:31.350077    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 17:30:31.350152    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.363374    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
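Each sshutil client above connects to the host port that the docker inspect template a few lines earlier resolved for the container's 22/tcp. A sketch of the same lookup by hand (the key path is the jenkins one from this run; substitute your own):

	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-018527)
	ssh -i /home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa \
	  -p "$PORT" docker@127.0.0.1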
	I0910 17:30:31.373019    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.398227    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.400711    8286 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:31.400730    8286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:30:31.400801    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.438198    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.443336    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.453627    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.482660    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.492112    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.507550    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.507952    8286 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 17:30:31.514513    8286 out.go:177]   - Using image docker.io/busybox:stable
	I0910 17:30:31.520315    8286 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:31.520337    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 17:30:31.520409    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:31.532530    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.543410    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	W0910 17:30:31.546633    8286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0910 17:30:31.546663    8286 retry.go:31] will retry after 143.140377ms: ssh: handshake failed: EOF
	I0910 17:30:31.557900    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.560598    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:31.580201    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	W0910 17:30:31.581861    8286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0910 17:30:31.581884    8286 retry.go:31] will retry after 281.737112ms: ssh: handshake failed: EOF
	I0910 17:30:31.646553    8286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:30:31.646673    8286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:30:32.093359    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:32.187619    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:32.192519    8286 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:30:32.192583    8286 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 17:30:32.231019    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:30:32.278311    8286 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:30:32.278379    8286 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 17:30:32.315419    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:32.435283    8286 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:30:32.435312    8286 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 17:30:32.474169    8286 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:30:32.474197    8286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 17:30:32.533873    8286 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:30:32.533900    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 17:30:32.548914    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:32.672889    8286 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:30:32.672917    8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 17:30:32.775066    8286 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:32.775092    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 17:30:32.795219    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:32.895535    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:32.939449    8286 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:30:32.939493    8286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 17:30:32.958445    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:32.964923    8286 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:30:32.964959    8286 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 17:30:33.025393    8286 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 17:30:33.025422    8286 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 17:30:33.076387    8286 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:30:33.076416    8286 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 17:30:33.095722    8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:30:33.095750    8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 17:30:33.126441    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:33.143634    8286 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:30:33.143677    8286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 17:30:33.194838    8286 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:30:33.194875    8286 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 17:30:33.241336    8286 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:30:33.241365    8286 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 17:30:33.284412    8286 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:33.284461    8286 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 17:30:33.313149    8286 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:30:33.313189    8286 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 17:30:33.351940    8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:30:33.351968    8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 17:30:33.474707    8286 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:30:33.474734    8286 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 17:30:33.516882    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:33.520092    8286 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:33.520117    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 17:30:33.539187    8286 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:33.539215    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 17:30:33.620495    8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:30:33.620524    8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 17:30:33.764929    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:33.821062    8286 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:30:33.821088    8286 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 17:30:33.865660    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:33.996187    8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:30:33.996229    8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 17:30:34.038884    8286 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:30:34.038913    8286 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 17:30:34.070688    8286 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.423991358s)
	I0910 17:30:34.070819    8286 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.424240908s)
	I0910 17:30:34.070840    8286 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
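The sed pipeline that just completed splices a hosts block mapping 192.168.49.1 to host.minikube.internal into CoreDNS's Corefile. One way to confirm the record actually landed (a sketch):

	kubectl -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'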
	I0910 17:30:34.072922    8286 node_ready.go:35] waiting up to 6m0s for node "addons-018527" to be "Ready" ...
	I0910 17:30:34.082425    8286 node_ready.go:49] node "addons-018527" has status "Ready":"True"
	I0910 17:30:34.082465    8286 node_ready.go:38] duration metric: took 9.385923ms for node "addons-018527" to be "Ready" ...
	I0910 17:30:34.082476    8286 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:34.116142    8286 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace to be "Ready" ...
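The node_ready/pod_ready waits above map onto plain kubectl; a rough equivalent for checking the same conditions by hand:

	kubectl wait --for=condition=Ready node/addons-018527 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=6m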
	I0910 17:30:34.335095    8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:30:34.335172    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 17:30:34.359243    8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:30:34.359308    8286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 17:30:34.477284    8286 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:34.477360    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 17:30:34.565564    8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:30:34.565636    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 17:30:34.575079    8286 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-018527" context rescaled to 1 replicas
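The rescale above is roughly the following (minikube performs it through the API rather than the CLI):

	kubectl -n kube-system scale deployment coredns --replicas=1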
	I0910 17:30:34.683409    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:35.059234    8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:30:35.059322    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 17:30:35.583329    8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:35.583374    8286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 17:30:35.632798    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.539370875s)
	I0910 17:30:35.632868    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.445027959s)
	I0910 17:30:36.098826    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:36.149444    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:38.259139    8286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 17:30:38.259255    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:38.288300    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:38.661649    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:39.246317    8286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 17:30:39.760533    8286 addons.go:234] Setting addon gcp-auth=true in "addons-018527"
	I0910 17:30:39.760600    8286 host.go:66] Checking if "addons-018527" exists ...
	I0910 17:30:39.761164    8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
	I0910 17:30:39.785464    8286 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 17:30:39.785523    8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
	I0910 17:30:39.818425    8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
	I0910 17:30:41.123196    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:43.128156    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:44.448672    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.217579819s)
	I0910 17:30:44.448737    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.133294701s)
	I0910 17:30:44.448780    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.899842557s)
	I0910 17:30:44.448919    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.653677065s)
	I0910 17:30:44.449043    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.55348521s)
	I0910 17:30:44.449075    8286 addons.go:475] Verifying addon ingress=true in "addons-018527"
	I0910 17:30:44.449313    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.490842863s)
	I0910 17:30:44.449547    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.323076464s)
	I0910 17:30:44.449561    8286 addons.go:475] Verifying addon registry=true in "addons-018527"
	I0910 17:30:44.449901    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.932960065s)
	I0910 17:30:44.449921    8286 addons.go:475] Verifying addon metrics-server=true in "addons-018527"
	I0910 17:30:44.450004    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.685047245s)
	W0910 17:30:44.450022    8286 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:44.450037    8286 retry.go:31] will retry after 364.686143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
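The failure above is a CRD-establishment race: applying the VolumeSnapshot CRDs and a VolumeSnapshotClass in the same kubectl apply can fail because the new kinds are not yet served, which is why minikube retries (and later re-applies with --force at 17:30:44.815144). One could instead wait for the CRD to be established before applying the class; a sketch:

	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s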
	I0910 17:30:44.450075    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.584388512s)
	I0910 17:30:44.450180    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.76667864s)
	I0910 17:30:44.453780    8286 out.go:177] * Verifying registry addon...
	I0910 17:30:44.454959    8286 out.go:177] * Verifying ingress addon...
	I0910 17:30:44.457015    8286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 17:30:44.457244    8286 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-018527 service yakd-dashboard -n yakd-dashboard
	
	I0910 17:30:44.458159    8286 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 17:30:44.526069    8286 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 17:30:44.526095    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.530963    8286 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 17:30:44.531047    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
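The kapi waits above poll pods by label selector until they report Ready; a rough kubectl equivalent of the two selectors being watched:

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m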
	I0910 17:30:44.815144    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:44.966090    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:44.967133    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.173700    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:45.471278    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.471481    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.577219    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.478332779s)
	I0910 17:30:45.577249    8286 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-018527"
	I0910 17:30:45.577499    8286 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.792006726s)
	I0910 17:30:45.580172    8286 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 17:30:45.580306    8286 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:45.583167    8286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 17:30:45.585542    8286 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 17:30:45.587622    8286 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:30:45.587696    8286 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 17:30:45.592973    8286 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 17:30:45.592997    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.696065    8286 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:30:45.696140    8286 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 17:30:45.744261    8286 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:45.744331    8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 17:30:45.812237    8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:45.965723    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.967001    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:46.122864    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.463742    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.464657    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:46.589474    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.964579    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:46.965420    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.088821    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.307816    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.492625167s)
	I0910 17:30:47.360672    8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.548399838s)
	I0910 17:30:47.363475    8286 addons.go:475] Verifying addon gcp-auth=true in "addons-018527"
	I0910 17:30:47.366126    8286 out.go:177] * Verifying gcp-auth addon...
	I0910 17:30:47.369344    8286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 17:30:47.373208    8286 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:30:47.476081    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.478322    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.589733    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.622918    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:47.963519    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.964906    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.089272    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.464213    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:48.464718    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.587589    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.961174    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:48.963704    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.088367    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.462047    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.462364    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.589410    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.623554    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:49.974998    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.976020    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.090688    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.460880    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.462670    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.589197    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.961839    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.975671    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.089882    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.463394    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.464821    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.588727    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.961244    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.963304    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:52.087693    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.122936    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:52.462481    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.464225    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:52.588996    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.961126    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.963647    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.088029    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.460579    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.463011    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.589522    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.962395    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.963354    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.089295    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.123750    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:54.461918    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.463530    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.587963    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.962401    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.963498    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.096437    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.462635    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:55.463036    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.588690    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.963096    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:55.964995    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.091545    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.475505    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:56.476636    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.588720    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.623307    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:56.961425    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:56.962853    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.089200    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.461351    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:57.463324    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.587895    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.963472    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.964436    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:58.088552    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.462881    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.463439    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:58.587897    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.961809    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:58.964988    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.088449    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.122246    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:59.464786    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.466700    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:59.588534    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.961289    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:59.963835    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.125761    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.497784    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:00.498969    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.612237    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.961698    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:00.965063    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.088921    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.122578    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:31:01.462278    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:01.463531    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.588038    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.962502    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:01.963449    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.089079    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.477081    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.478240    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:02.588951    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.963627    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.964240    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:03.089235    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.123567    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:31:03.476244    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.477100    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:03.587949    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.964768    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.965324    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:04.092750    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.466532    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:04.468112    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.587781    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.966446    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.968174    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:05.089943    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.126548    8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
	I0910 17:31:05.475114    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.476270    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:05.588496    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.964570    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:05.965956    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.088495    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.124110    8286 pod_ready.go:93] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"True"
	I0910 17:31:06.124184    8286 pod_ready.go:82] duration metric: took 32.007965274s for pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.124212    8286 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.126893    8286 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-zrdzw" not found
	I0910 17:31:06.126965    8286 pod_ready.go:82] duration metric: took 2.731957ms for pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace to be "Ready" ...
	E0910 17:31:06.126990    8286 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-zrdzw" not found
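The skipped coredns-6f6b679f8f-zrdzw pod shows how the waiter tolerates pods that disappear mid-wait (here, a CoreDNS replica removed after startup): a NotFound error is logged and treated as "skip", not as a failure. A small sketch of that check, assuming only standard client-go and apimachinery APIs:

	package podwait

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReadyOrGone returns nil when the pod no longer exists (treated as
	// "skip", as in the coredns-6f6b679f8f-zrdzw case above) and otherwise
	// propagates the lookup error, if any.
	func podReadyOrGone(cs *kubernetes.Clientset, ns, name string) error {
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod deleted while waiting: skip rather than fail the wait
		}
		return err
	}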
	I0910 17:31:06.127011    8286 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.134048    8286 pod_ready.go:93] pod "etcd-addons-018527" in "kube-system" namespace has status "Ready":"True"
	I0910 17:31:06.134121    8286 pod_ready.go:82] duration metric: took 7.076185ms for pod "etcd-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.134162    8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.142151    8286 pod_ready.go:93] pod "kube-apiserver-addons-018527" in "kube-system" namespace has status "Ready":"True"
	I0910 17:31:06.142224    8286 pod_ready.go:82] duration metric: took 8.035346ms for pod "kube-apiserver-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.142251    8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.149325    8286 pod_ready.go:93] pod "kube-controller-manager-addons-018527" in "kube-system" namespace has status "Ready":"True"
	I0910 17:31:06.149397    8286 pod_ready.go:82] duration metric: took 7.123462ms for pod "kube-controller-manager-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.149424    8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdjgm" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.320729    8286 pod_ready.go:93] pod "kube-proxy-xdjgm" in "kube-system" namespace has status "Ready":"True"
	I0910 17:31:06.320754    8286 pod_ready.go:82] duration metric: took 171.309068ms for pod "kube-proxy-xdjgm" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.320768    8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.463428    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.465596    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:06.588811    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.720162    8286 pod_ready.go:93] pod "kube-scheduler-addons-018527" in "kube-system" namespace has status "Ready":"True"
	I0910 17:31:06.720190    8286 pod_ready.go:82] duration metric: took 399.414174ms for pod "kube-scheduler-addons-018527" in "kube-system" namespace to be "Ready" ...
	I0910 17:31:06.720201    8286 pod_ready.go:39] duration metric: took 32.637713929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:31:06.720219    8286 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:31:06.720303    8286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:31:06.737642    8286 api_server.go:72] duration metric: took 35.928295345s to wait for apiserver process to appear ...
	I0910 17:31:06.737707    8286 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:31:06.737739    8286 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0910 17:31:06.745435    8286 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0910 17:31:06.746720    8286 api_server.go:141] control plane version: v1.31.0
	I0910 17:31:06.746760    8286 api_server.go:131] duration metric: took 9.033185ms to wait for apiserver health ...
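The healthz probe above is an ordinary HTTPS GET against the apiserver; under the default RBAC rules /healthz is readable without credentials, so it can be reproduced with a plain client that skips certificate verification. A minimal sketch against the endpoint from this log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// The apiserver serves its own self-signed cert, so skip verification
		// for this ad-hoc probe (as curl -k would).
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz") // endpoint from this log
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}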
	I0910 17:31:06.746770    8286 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:31:06.929627    8286 system_pods.go:59] 17 kube-system pods found
	I0910 17:31:06.929664    8286 system_pods.go:61] "coredns-6f6b679f8f-sdtps" [583b5997-bafc-4b57-aa34-d00095de4aed] Running
	I0910 17:31:06.929676    8286 system_pods.go:61] "csi-hostpath-attacher-0" [b45ababd-630f-4f31-b7c7-7fd839c504cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:31:06.929685    8286 system_pods.go:61] "csi-hostpath-resizer-0" [184c911a-dd86-4ff9-9655-d1ffd869d1dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:31:06.929693    8286 system_pods.go:61] "csi-hostpathplugin-mvsrq" [55ab278e-003b-4eb9-9120-9068d57eef7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:31:06.929699    8286 system_pods.go:61] "etcd-addons-018527" [8d8ee3e5-02ae-446e-a059-3ae8eb68c5ba] Running
	I0910 17:31:06.929704    8286 system_pods.go:61] "kube-apiserver-addons-018527" [a56e3878-9412-4e3a-b75f-289231338059] Running
	I0910 17:31:06.929708    8286 system_pods.go:61] "kube-controller-manager-addons-018527" [f6e7221b-6c18-49d0-8a91-b41b70e5b6fc] Running
	I0910 17:31:06.929718    8286 system_pods.go:61] "kube-ingress-dns-minikube" [d807dd65-94ff-458f-90b4-26a6a55d5921] Running
	I0910 17:31:06.929722    8286 system_pods.go:61] "kube-proxy-xdjgm" [f303e3f2-d196-448d-ac3a-965a45fc9253] Running
	I0910 17:31:06.929732    8286 system_pods.go:61] "kube-scheduler-addons-018527" [a8cc5199-4392-4108-9e86-e2e08078002b] Running
	I0910 17:31:06.929739    8286 system_pods.go:61] "metrics-server-84c5f94fbc-m4w8v" [c99d7e90-85cb-445e-9f15-c2a13cc75a7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:31:06.929755    8286 system_pods.go:61] "nvidia-device-plugin-daemonset-nzqkz" [8e4852cb-f95d-48ef-a74c-8da89946c2d5] Running
	I0910 17:31:06.929776    8286 system_pods.go:61] "registry-66c9cd494c-qdjcc" [4ac3168f-0bcd-4153-867b-4c58e4383c15] Running
	I0910 17:31:06.929783    8286 system_pods.go:61] "registry-proxy-g99fs" [9ffaedc2-7aad-4454-b435-9dc17bafb9aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:31:06.929790    8286 system_pods.go:61] "snapshot-controller-56fcc65765-bdvsv" [74ef0080-01a0-4ef4-9976-b7e370436ce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:31:06.929797    8286 system_pods.go:61] "snapshot-controller-56fcc65765-w5wvl" [24868c11-9966-48f6-9256-9a010dfd0cec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:31:06.929803    8286 system_pods.go:61] "storage-provisioner" [62081ed1-b8d0-41d3-b12b-49d7ae204d60] Running
	I0910 17:31:06.929819    8286 system_pods.go:74] duration metric: took 183.042342ms to wait for pod list to return data ...
	I0910 17:31:06.929833    8286 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:31:06.960879    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:06.962767    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.088128    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.120491    8286 default_sa.go:45] found service account: "default"
	I0910 17:31:07.120518    8286 default_sa.go:55] duration metric: took 190.677803ms for default service account to be created ...
	I0910 17:31:07.120528    8286 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:31:07.327521    8286 system_pods.go:86] 17 kube-system pods found
	I0910 17:31:07.327556    8286 system_pods.go:89] "coredns-6f6b679f8f-sdtps" [583b5997-bafc-4b57-aa34-d00095de4aed] Running
	I0910 17:31:07.327566    8286 system_pods.go:89] "csi-hostpath-attacher-0" [b45ababd-630f-4f31-b7c7-7fd839c504cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:31:07.327583    8286 system_pods.go:89] "csi-hostpath-resizer-0" [184c911a-dd86-4ff9-9655-d1ffd869d1dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:31:07.327594    8286 system_pods.go:89] "csi-hostpathplugin-mvsrq" [55ab278e-003b-4eb9-9120-9068d57eef7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:31:07.327602    8286 system_pods.go:89] "etcd-addons-018527" [8d8ee3e5-02ae-446e-a059-3ae8eb68c5ba] Running
	I0910 17:31:07.327608    8286 system_pods.go:89] "kube-apiserver-addons-018527" [a56e3878-9412-4e3a-b75f-289231338059] Running
	I0910 17:31:07.327616    8286 system_pods.go:89] "kube-controller-manager-addons-018527" [f6e7221b-6c18-49d0-8a91-b41b70e5b6fc] Running
	I0910 17:31:07.327621    8286 system_pods.go:89] "kube-ingress-dns-minikube" [d807dd65-94ff-458f-90b4-26a6a55d5921] Running
	I0910 17:31:07.327626    8286 system_pods.go:89] "kube-proxy-xdjgm" [f303e3f2-d196-448d-ac3a-965a45fc9253] Running
	I0910 17:31:07.327633    8286 system_pods.go:89] "kube-scheduler-addons-018527" [a8cc5199-4392-4108-9e86-e2e08078002b] Running
	I0910 17:31:07.327639    8286 system_pods.go:89] "metrics-server-84c5f94fbc-m4w8v" [c99d7e90-85cb-445e-9f15-c2a13cc75a7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:31:07.327654    8286 system_pods.go:89] "nvidia-device-plugin-daemonset-nzqkz" [8e4852cb-f95d-48ef-a74c-8da89946c2d5] Running
	I0910 17:31:07.327664    8286 system_pods.go:89] "registry-66c9cd494c-qdjcc" [4ac3168f-0bcd-4153-867b-4c58e4383c15] Running
	I0910 17:31:07.327671    8286 system_pods.go:89] "registry-proxy-g99fs" [9ffaedc2-7aad-4454-b435-9dc17bafb9aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:31:07.327677    8286 system_pods.go:89] "snapshot-controller-56fcc65765-bdvsv" [74ef0080-01a0-4ef4-9976-b7e370436ce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:31:07.327686    8286 system_pods.go:89] "snapshot-controller-56fcc65765-w5wvl" [24868c11-9966-48f6-9256-9a010dfd0cec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:31:07.327693    8286 system_pods.go:89] "storage-provisioner" [62081ed1-b8d0-41d3-b12b-49d7ae204d60] Running
	I0910 17:31:07.327702    8286 system_pods.go:126] duration metric: took 207.16867ms to wait for k8s-apps to be running ...
	I0910 17:31:07.327714    8286 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:31:07.327780    8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:31:07.341761    8286 system_svc.go:56] duration metric: took 14.039295ms WaitForService to wait for kubelet
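The kubelet check is just systemctl's exit status: is-active --quiet prints nothing and exits 0 when the unit is active. A sketch of the same probe from Go (simplified to the plain kubelet unit name, and sudo omitted since is-active needs no privileges):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means the unit is active; any nonzero status surfaces
		// as an *exec.ExitError from Run.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet not active:", err)
			return
		}
		fmt.Println("kubelet is running")
	}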
	I0910 17:31:07.341800    8286 kubeadm.go:582] duration metric: took 36.53244877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:31:07.341821    8286 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:31:07.464804    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:07.464994    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.521712    8286 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0910 17:31:07.521753    8286 node_conditions.go:123] node cpu capacity is 2
	I0910 17:31:07.521768    8286 node_conditions.go:105] duration metric: took 179.94145ms to run NodePressure ...
	I0910 17:31:07.521781    8286 start.go:241] waiting for startup goroutines ...
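The NodePressure step reads the node's capacity fields; the figures logged above (203034800Ki ephemeral storage, 2 CPUs) come straight from node.Status.Capacity. A client-go sketch that prints the same values (kubeconfig path as used throughout this log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}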
	I0910 17:31:07.588536    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.963914    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:07.964230    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.090944    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.463660    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.464129    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:08.588290    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.964809    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.966268    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:09.088499    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.461858    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:09.463050    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.590154    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.966442    8286 kapi.go:107] duration metric: took 25.509421396s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 17:31:09.967741    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.090187    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.465605    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.588962    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.972639    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.090100    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.467671    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.598116    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.965319    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.095506    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.462701    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.589214    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.963813    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.092374    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.466824    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.588597    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.963840    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.090057    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.463548    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.589397    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.962963    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.102486    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.463567    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.587988    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.962904    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.088470    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.466621    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.589482    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.963047    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.088506    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:17.462916    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.588848    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:17.963015    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.090414    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:18.475950    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.595329    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:18.963071    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.100137    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:19.474903    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.589792    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:19.966061    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.088592    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:20.478523    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.594926    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:20.975676    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.089474    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:21.463671    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.588175    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:21.962717    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.088476    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:22.463069    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.588234    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:22.963179    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.095243    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:23.468350    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.588304    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:23.963379    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.090862    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:24.462027    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.588632    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:24.975161    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.091220    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:25.462936    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.589048    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:25.962709    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.089329    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:26.464327    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.588087    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:26.979042    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.091104    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:27.464383    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.588509    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:27.962966    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.088254    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:28.476333    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.592457    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:28.962988    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.088522    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:29.468170    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.588232    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:29.963438    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.094903    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:30.463637    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.589433    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:30.963603    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:31.088821    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:31.462423    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:31.591288    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:31.963030    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:32.088019    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:32.464346    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:32.587956    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:32.963595    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:33.097131    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:33.475595    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:33.591192    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:33.963075    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:34.089576    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:34.463070    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:34.588515    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:34.976804    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:35.090318    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:35.476383    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:35.588972    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:35.964470    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:36.089667    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:36.463906    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:36.589433    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:36.963019    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:37.089909    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:37.464084    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:37.588243    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:37.963181    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:38.114587    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:38.477194    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:38.588818    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:38.963019    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:39.088815    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:39.466151    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:39.588083    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:39.963246    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:40.089935    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:40.478644    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:40.588195    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:40.962385    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:41.088553    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:41.463627    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:41.590876    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:41.963146    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:42.089899    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:42.462909    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:42.589099    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:42.963313    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:43.088474    8286 kapi.go:107] duration metric: took 57.505305538s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 17:31:43.462385    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:43.963379    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:44.462908    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:44.963699    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:45.472960    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:45.966067    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:46.463570    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:46.975712    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:47.463499    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:47.963548    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:48.462583    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:48.964027    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:49.463610    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:49.964356    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:50.474230    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:50.963098    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:51.462822    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:51.963160    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:52.464085    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:52.975175    8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:53.464771    8286 kapi.go:107] duration metric: took 1m9.006606038s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 17:32:09.382647    8286 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:32:09.382669    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:09.874444    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:10.374076    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:10.872775    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:11.373263    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:11.873867    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:12.373674    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:12.873101    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:13.373141    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:13.873674    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:14.373829    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:14.872916    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:15.373119    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:15.874438    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:16.372898    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:16.872759    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:17.373477    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:17.873892    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:18.374297    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:18.876426    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:19.373754    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:19.873711    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:20.373851    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:20.872330    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:21.373470    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:21.873806    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:22.373542    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:22.872921    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:23.373143    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:23.873130    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:24.372536    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:24.873324    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:25.373428    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:25.874227    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:26.373289    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:26.873591    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:27.374039    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:27.873689    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:28.373740    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:28.872872    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:29.372699    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:29.873749    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:30.373224    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:30.874594    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:31.373345    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:31.873497    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:32.373178    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:32.873542    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:33.373984    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:33.873202    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:34.372917    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:34.873145    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:35.372896    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:35.873237    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:36.373500    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:36.872787    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:37.373476    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:37.873650    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:38.373750    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:38.873533    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:39.373558    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:39.872672    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:40.373665    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:40.873496    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:41.373930    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:41.873257    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:42.373808    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:42.872537    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:43.373439    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:43.873271    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:44.373310    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:44.873758    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:45.376331    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:45.873410    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:46.373892    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:46.873503    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:47.372747    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:47.873051    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:48.373445    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:48.873173    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:49.372463    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:49.873376    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:50.387059    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:50.873468    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:51.374023    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:51.873445    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:52.373623    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:52.874235    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:53.372953    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:53.873781    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:54.372719    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:54.873456    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:55.373820    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:55.872510    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:56.374025    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:56.874275    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:57.372942    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:57.872797    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:58.373107    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:58.874273    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:59.373343    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:59.873246    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:00.386589    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:00.872933    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:01.374477    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:01.873391    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:02.373771    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:02.873816    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:03.373079    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:03.873616    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:04.374298    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:04.873586    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:05.372754    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:05.873743    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:06.372847    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:06.873328    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:07.373087    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:07.873252    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:08.373114    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:08.873225    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:09.373049    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:09.872416    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:10.373471    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:10.873746    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:11.373751    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:11.872493    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:12.374139    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:12.872837    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:13.374573    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:13.873633    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:14.373604    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:14.873398    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:15.373328    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:15.873023    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:16.373219    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:16.873875    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:17.373716    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:17.874400    8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:18.373781    8286 kapi.go:107] duration metric: took 2m31.004438367s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 17:33:18.375990    8286 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-018527 cluster.
	I0910 17:33:18.378135    8286 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 17:33:18.380286    8286 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 17:33:18.383530    8286 out.go:177] * Enabled addons: ingress-dns, default-storageclass, volcano, nvidia-device-plugin, cloud-spanner, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0910 17:33:18.386089    8286 addons.go:510] duration metric: took 2m47.576383773s for enable addons: enabled=[ingress-dns default-storageclass volcano nvidia-device-plugin cloud-spanner storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0910 17:33:18.386140    8286 start.go:246] waiting for cluster config update ...
	I0910 17:33:18.386161    8286 start.go:255] writing updated cluster config ...
	I0910 17:33:18.386510    8286 ssh_runner.go:195] Run: rm -f paused
	I0910 17:33:18.743245    8286 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:33:18.745493    8286 out.go:177] * Done! kubectl is now configured to use "addons-018527" cluster and "default" namespace by default
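
The two gcp-auth hints above are actionable as-is. A minimal sketch, assuming this run's kubectl context and minikube binary; the pod name and image are illustrative, and only the `gcp-auth-skip-secret` label key and the `--refresh` flag come from the log:

	# Skip credential mounting for one pod by putting the label key named
	# in the log into the pod's own configuration:
	kubectl --context addons-018527 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: nginx:alpine
	EOF
	# Re-mount credentials into pods created before the addon was enabled:
	out/minikube-linux-arm64 -p addons-018527 addons enable gcp-auth --refresh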
	
	
	==> Docker <==
	Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.192195524Z" level=info msg="ignoring event" container=5f97100224f9fb78aea0cc821bc7b77e9bd12d7c55a47e61bbb1c6b3ddffe8b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.282013539Z" level=info msg="ignoring event" container=1021e2600f7559a80cb94cd9a9fd67b3e0e2ad76789d3e74487d310754f45c56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.494954183Z" level=info msg="ignoring event" container=102ab856c08ffb9f7282a4dc8eb8ec63e7f03e4739f728e2462692847ca24826 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.519098542Z" level=info msg="ignoring event" container=7c772b5f1cbaee8dcfae65ff4af5453b21125bd74eb6069188af2c3eaff22931 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:40 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:42:40Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 10 17:42:42 addons-018527 dockerd[1277]: time="2024-09-10T17:42:42.548395852Z" level=info msg="ignoring event" container=8b61e9135207071e3c8eb78d69e6c7e0ad7ce3dbc023ac2901a2dd2b28c83311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:46 addons-018527 dockerd[1277]: time="2024-09-10T17:42:46.287111939Z" level=info msg="ignoring event" container=f0e4180102f491674b2b31fce8dd7d3e509b2c47b933a26cad5e4be0a322b66d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:46 addons-018527 dockerd[1277]: time="2024-09-10T17:42:46.404939487Z" level=info msg="ignoring event" container=d3d87b82603a9ea8d3f6f5edb9b6d47378d03efa70d59e7597042ef38c7a2ad8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:51 addons-018527 dockerd[1277]: time="2024-09-10T17:42:51.988495030Z" level=info msg="ignoring event" container=13553bd588541fa0615d04fd6d9eb74e53fa34890d3e1ba6a6c67937a02484ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:42:55 addons-018527 dockerd[1277]: time="2024-09-10T17:42:55.013798012Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:42:55 addons-018527 dockerd[1277]: time="2024-09-10T17:42:55.041868674Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:42:58 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:42:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b95e5012cb085e08cf25502a3a4faacd857b944536ef10f2154dbd10f5c26bcc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 10 17:43:00 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:43:00Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 10 17:43:08 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:43:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e74703babb18966076cd24160ddfdf272640c052c86ac897099e9220d07d7303/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 10 17:43:08 addons-018527 dockerd[1277]: time="2024-09-10T17:43:08.601188836Z" level=info msg="ignoring event" container=9b30e5a78810e1a375a6a96b28730afb7ae1e914835561443cf4346349c267cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:08 addons-018527 dockerd[1277]: time="2024-09-10T17:43:08.686688112Z" level=info msg="ignoring event" container=0d4493a49450b050fe3900dd41f1e15db01177393497fc8f82799b8b87386cb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:08 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:43:08Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 10 17:43:13 addons-018527 dockerd[1277]: time="2024-09-10T17:43:13.078237706Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=b4a43fc30d3d00b4424cf2f28d9c2189293f6ba52242815096dfa5d83311e7a1
	Sep 10 17:43:13 addons-018527 dockerd[1277]: time="2024-09-10T17:43:13.133824510Z" level=info msg="ignoring event" container=b4a43fc30d3d00b4424cf2f28d9c2189293f6ba52242815096dfa5d83311e7a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:13 addons-018527 dockerd[1277]: time="2024-09-10T17:43:13.286633991Z" level=info msg="ignoring event" container=5c41b94dcc61a7e1aa5129dd31bcd08af4b50b51aea193835be5f23aac6b32bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:15 addons-018527 dockerd[1277]: time="2024-09-10T17:43:15.479888749Z" level=info msg="ignoring event" container=50f7cf17b99f106ad07eff4d119219ed13836435f5f1129c6558a6553501ccc2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.084666852Z" level=info msg="ignoring event" container=1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.210991202Z" level=info msg="ignoring event" container=fcc9398f81a125dba8d2ec3f9571af37a38e16c0e9fe162bf40e7564d804f5a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.303987162Z" level=info msg="ignoring event" container=49f4f15f4ca83135dbb5373d73fee969b523b6ac62dd8c24b880c71a27aeeb78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.542767130Z" level=info msg="ignoring event" container=bf114e40200f2370691dbbff7125ec43884c7d4cc069efdca30c74807cd659da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
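
The two dockerd entries at 17:42:55 record the pull failure behind the registry test timeout: the HEAD for the `latest` manifest of gcr.io/k8s-minikube/busybox came back unauthorized. A hedged way to replay the daemon's request outside Docker (anonymous registry access assumed):

	# A 401/unauthorized here confirms the :latest tag is not resolvable
	# without credentials, matching the daemon log above:
	curl -sI https://gcr.io/v2/k8s-minikube/busybox/manifests/latest
	# Pulling a pinned tag instead of :latest usually sidesteps the problem;
	# the tag below is an assumption, not taken from this run:
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc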
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	7156cc762ae99       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  9 seconds ago       Running             hello-world-app            0                   e74703babb189       hello-world-app-55bf9c44b4-72dss
	66fb7448e9e6c       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                17 seconds ago      Running             nginx                      0                   b95e5012cb085       nginx
	deaaecc75b0cf       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   e8b96bfa37aa2       gcp-auth-89d5ffd79-sxmzn
	1fc45eb577618       420193b27261a                                                                                                                11 minutes ago      Exited              patch                      1                   95c3e47956610       ingress-nginx-admission-patch-gtv72
	a0cf5e63c2f6c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   041cfd938d363       ingress-nginx-admission-create-bctfp
	c4da8fea539f6       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   2a0f46019df02       yakd-dashboard-67d98fc6b-2z6kf
	f40728ed4393f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   e0063888b72a1       local-path-provisioner-86d989889c-xqkbq
	fcc9398f81a12       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   bf114e40200f2       registry-proxy-g99fs
	5b05093918ea6       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   c6835c0b4e909       cloud-spanner-emulator-769b77f747-5jq7w
	4385c5ddb93f1       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   c5b0ddd65d8b3       nvidia-device-plugin-daemonset-nzqkz
	3ae63920702c2       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   98214d0b1fe3f       storage-provisioner
	da7987e7a97f9       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                    0                   25e0c9c442a06       coredns-6f6b679f8f-sdtps
	039472306dbd6       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                 0                   4b79eb5a3b276       kube-proxy-xdjgm
	0355e6ec34842       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   c82e990e69829       etcd-addons-018527
	a4dbcd9b4921b       cd0f0ae0ec9e0                                                                                                                12 minutes ago      Running             kube-apiserver             0                   1e1d54198a857       kube-apiserver-addons-018527
	4124297d4675f       fbbbd428abb4d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   8428168c70e9c       kube-scheduler-addons-018527
	48092f20ec16c       fcb0683e6bdbd                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   2a37be2c30848       kube-controller-manager-addons-018527
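
This table is CRI-level container state, so it can be regenerated on the node itself rather than through kubectl. A sketch, assuming the minikube binary used elsewhere in this report and crictl being available on the node:

	# List running and exited containers, as the post-mortem table does:
	out/minikube-linux-arm64 -p addons-018527 ssh -- sudo crictl ps -a

The Exited `registry-proxy-g99fs` row corresponds to the 17:43:16 TaskDelete events for container fcc9398f81a1... in the Docker section above.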
	
	
	==> coredns [da7987e7a97f] <==
	[INFO] 10.244.0.21:46615 - 3681 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000151827s
	[INFO] 10.244.0.21:46615 - 50661 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000457862s
	[INFO] 10.244.0.21:46615 - 17502 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000326654s
	[INFO] 10.244.0.21:46615 - 17156 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000140882s
	[INFO] 10.244.0.21:46615 - 22861 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001521155s
	[INFO] 10.244.0.21:56741 - 56204 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000173915s
	[INFO] 10.244.0.21:46615 - 15934 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005144893s
	[INFO] 10.244.0.21:37711 - 58923 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000177698s
	[INFO] 10.244.0.21:46615 - 29702 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000143286s
	[INFO] 10.244.0.21:37711 - 7022 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071647s
	[INFO] 10.244.0.21:56741 - 6687 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042962s
	[INFO] 10.244.0.21:56741 - 26371 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0001259s
	[INFO] 10.244.0.21:37711 - 64365 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049633s
	[INFO] 10.244.0.21:56741 - 32583 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000124866s
	[INFO] 10.244.0.21:37711 - 45083 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000371159s
	[INFO] 10.244.0.21:56741 - 32129 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000277547s
	[INFO] 10.244.0.21:37711 - 18429 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064763s
	[INFO] 10.244.0.21:37711 - 39227 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000129756s
	[INFO] 10.244.0.21:56741 - 6964 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074905s
	[INFO] 10.244.0.21:56741 - 44958 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002550805s
	[INFO] 10.244.0.21:37711 - 19458 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001540157s
	[INFO] 10.244.0.21:56741 - 43079 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002301485s
	[INFO] 10.244.0.21:37711 - 64251 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.010764904s
	[INFO] 10.244.0.21:56741 - 24811 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064278s
	[INFO] 10.244.0.21:37711 - 26496 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000101653s
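
The NXDOMAIN fan-out above is search-path expansion, not a resolution bug: with `options ndots:5` (see the rewritten resolv.conf in the Docker section), the resolver tries every search domain before the name as given, and only the final query returns NOERROR. A hedged check that bypasses the search list by marking the name fully qualified with a trailing dot:

	# Only a single query should reach CoreDNS for a name ending in ".":
	kubectl --context addons-018527 run dns-check --rm -it --restart=Never \
	  --image=busybox -- nslookup hello-world-app.default.svc.cluster.local.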
	
	
	==> describe nodes <==
	Name:               addons-018527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-018527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-018527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_30_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-018527
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:30:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-018527
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:43:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:39:06 +0000   Tue, 10 Sep 2024 17:30:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:39:06 +0000   Tue, 10 Sep 2024 17:30:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:39:06 +0000   Tue, 10 Sep 2024 17:30:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:39:06 +0000   Tue, 10 Sep 2024 17:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-018527
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 efffff036ca740c4bf5a4a66d6c81e7f
	  System UUID:                63da9386-f453-442b-9310-01906323f05d
	  Boot ID:                    5dfcb38b-fd71-4dbc-a44d-87cb8fa8678e
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-5jq7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-72dss           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  gcp-auth                    gcp-auth-89d5ffd79-sxmzn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-sdtps                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-018527                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-018527               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-018527      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xdjgm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-018527               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-nzqkz       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-xqkbq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-2z6kf             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-018527 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-018527 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-018527 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-018527 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-018527 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-018527 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-018527 event: Registered Node addons-018527 in Controller
	  Normal   CIDRAssignmentFailed     12m                cidrAllocator    Node addons-018527 status is now: CIDRAssignmentFailed
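
This block is plain `kubectl describe node` output; the Allocated resources percentages are requests over allocatable, e.g. the 750m CPU requested out of the 2000m allocatable rounds to 37%. It can be regenerated with:

	kubectl --context addons-018527 describe node addons-018527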
	
	
	==> dmesg <==
	[Sep10 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014929] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.479642] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.766149] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.162243] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [0355e6ec3484] <==
	{"level":"info","ts":"2024-09-10T17:30:20.363702Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-10T17:30:20.364480Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-10T17:30:20.998373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-10T17:30:20.998608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-10T17:30:20.998771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-10T17:30:20.998922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-10T17:30:20.999073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-10T17:30:20.999204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-10T17:30:20.999326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-10T17:30:21.000857Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:30:21.003974Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-018527 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T17:30:21.004336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:30:21.004834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:30:21.005171Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:30:21.005391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:30:21.005507Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:30:21.006471Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:30:21.007714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-10T17:30:21.015368Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:30:21.016567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T17:30:21.016805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T17:30:21.022363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T17:40:21.095989Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1883}
	{"level":"info","ts":"2024-09-10T17:40:21.143940Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1883,"took":"46.489615ms","hash":1121580216,"current-db-size-bytes":9035776,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":5070848,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-10T17:40:21.143993Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1121580216,"revision":1883,"compact-revision":-1}
	
	
	==> gcp-auth [deaaecc75b0c] <==
	2024/09/10 17:33:17 GCP Auth Webhook started!
	2024/09/10 17:33:35 Ready to marshal response ...
	2024/09/10 17:33:35 Ready to write response ...
	2024/09/10 17:33:35 Ready to marshal response ...
	2024/09/10 17:33:35 Ready to write response ...
	2024/09/10 17:34:00 Ready to marshal response ...
	2024/09/10 17:34:00 Ready to write response ...
	2024/09/10 17:34:00 Ready to marshal response ...
	2024/09/10 17:34:00 Ready to write response ...
	2024/09/10 17:34:00 Ready to marshal response ...
	2024/09/10 17:34:00 Ready to write response ...
	2024/09/10 17:42:09 Ready to marshal response ...
	2024/09/10 17:42:09 Ready to write response ...
	2024/09/10 17:42:15 Ready to marshal response ...
	2024/09/10 17:42:15 Ready to write response ...
	2024/09/10 17:42:21 Ready to marshal response ...
	2024/09/10 17:42:21 Ready to write response ...
	2024/09/10 17:42:57 Ready to marshal response ...
	2024/09/10 17:42:57 Ready to write response ...
	2024/09/10 17:43:07 Ready to marshal response ...
	2024/09/10 17:43:07 Ready to write response ...
	2024/09/10 17:43:17 Ready to marshal response ...
	2024/09/10 17:43:17 Ready to write response ...
	2024/09/10 17:43:17 Ready to marshal response ...
	2024/09/10 17:43:17 Ready to write response ...
	
	
	==> kernel <==
	 17:43:17 up 25 min,  0 users,  load average: 0.82, 0.82, 0.75
	Linux addons-018527 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
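
These three lines are host facts (`uptime`, `uname -a`, and the os-release pretty name) collected from the node. A sketch to reproduce them over the same profile:

	out/minikube-linux-arm64 -p addons-018527 ssh -- \
	  "uptime; uname -a; grep PRETTY_NAME /etc/os-release"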
	
	
	==> kube-apiserver [a4dbcd9b4921] <==
	I0910 17:33:51.384011       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0910 17:33:51.520525       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0910 17:33:51.828059       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0910 17:33:51.841981       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0910 17:33:51.898238       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0910 17:33:51.979575       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0910 17:33:52.398396       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0910 17:33:52.565453       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0910 17:42:16.590885       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0910 17:42:37.987561       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:37.987617       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:42:38.023679       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:38.024044       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:42:38.047826       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:38.047914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:42:38.200291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:38.200343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0910 17:42:39.032236       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0910 17:42:39.200769       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0910 17:42:39.234789       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0910 17:42:51.904468       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0910 17:42:53.044028       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0910 17:42:57.597785       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0910 17:42:57.923434       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.189.109"}
	I0910 17:43:07.600575       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.193.86"}
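
The two `allocated clusterIPs` entries correspond to the nginx and hello-world-app Services created near the end of the test; the allocations are easy to confirm:

	# The CLUSTER-IP column should show 10.104.189.109 and 10.104.193.86,
	# matching the apiserver log above:
	kubectl --context addons-018527 get svc nginx hello-world-app -o wide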
	
	
	==> kube-controller-manager [48092f20ec16] <==
	I0910 17:43:01.073855       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0910 17:43:01.073918       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 17:43:02.166146       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0910 17:43:03.020088       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:03.020132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:43:03.495551       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:03.495593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:07.434949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.960852ms"
	I0910 17:43:07.443341       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.342573ms"
	I0910 17:43:07.443463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="83.241µs"
	I0910 17:43:07.463162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.872µs"
	W0910 17:43:08.045431       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:08.045524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:09.411783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.303899ms"
	I0910 17:43:09.411854       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.009µs"
	I0910 17:43:09.992125       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0910 17:43:09.996764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.448µs"
	I0910 17:43:09.998756       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0910 17:43:11.243948       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:11.244003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:43:11.582105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:11.582149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:43:13.501367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:13.501429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:16.012204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="18.831µs"
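
The repeated PartialObjectMetadata watch failures are the metadata informers chasing CRDs that had just been deleted when the volumesnapshot and inspektor-gadget addons were disabled (the apiserver section above logs the matching "Terminating all watchers" events); they are noise rather than a controller fault. A hedged check that the CRDs are in fact gone:

	# Expect no output once both addons are disabled:
	kubectl --context addons-018527 get crd | grep -E 'snapshot.storage.k8s.io|gadget.kinvolk.io'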
	
	
	==> kube-proxy [039472306dbd] <==
	I0910 17:30:32.373714       1 server_linux.go:66] "Using iptables proxy"
	I0910 17:30:32.580649       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0910 17:30:32.580727       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:30:32.612097       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0910 17:30:32.612175       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:30:32.617465       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:30:32.617912       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:30:32.617930       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:30:32.619549       1 config.go:197] "Starting service config controller"
	I0910 17:30:32.619574       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:30:32.619596       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:30:32.619601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:30:32.624262       1 config.go:326] "Starting node config controller"
	I0910 17:30:32.624293       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:30:32.721993       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 17:30:32.722062       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:30:32.728680       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4124297d4675] <==
	W0910 17:30:23.278712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:30:23.278753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:23.278808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:23.278820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:23.278976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:23.278993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.104346       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 17:30:24.104622       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 17:30:24.134528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 17:30:24.134644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.243702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:24.243744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.260935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 17:30:24.260983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.323350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 17:30:24.323635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.355989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 17:30:24.356261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.376994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:24.377036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.435413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:24.435635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:24.473199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:24.473402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 17:30:26.647918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.560507    2338 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bsv8w\" (UniqueName: \"kubernetes.io/projected/4ac3168f-0bcd-4153-867b-4c58e4383c15-kube-api-access-bsv8w\") on node \"addons-018527\" DevicePath \"\""
	Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.613074    2338 scope.go:117] "RemoveContainer" containerID="1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"
	Sep 10 17:43:16 addons-018527 kubelet[2338]: E0910 17:43:16.614564    2338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e" containerID="1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"
	Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.614664    2338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"} err="failed to get container status \"1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"
	Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.863540    2338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf8ck\" (UniqueName: \"kubernetes.io/projected/9ffaedc2-7aad-4454-b435-9dc17bafb9aa-kube-api-access-pf8ck\") pod \"9ffaedc2-7aad-4454-b435-9dc17bafb9aa\" (UID: \"9ffaedc2-7aad-4454-b435-9dc17bafb9aa\") "
	Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.876490    2338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffaedc2-7aad-4454-b435-9dc17bafb9aa-kube-api-access-pf8ck" (OuterVolumeSpecName: "kube-api-access-pf8ck") pod "9ffaedc2-7aad-4454-b435-9dc17bafb9aa" (UID: "9ffaedc2-7aad-4454-b435-9dc17bafb9aa"). InnerVolumeSpecName "kube-api-access-pf8ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.964398    2338 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pf8ck\" (UniqueName: \"kubernetes.io/projected/9ffaedc2-7aad-4454-b435-9dc17bafb9aa-kube-api-access-pf8ck\") on node \"addons-018527\" DevicePath \"\""
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.718502    2338 scope.go:117] "RemoveContainer" containerID="fcc9398f81a125dba8d2ec3f9571af37a38e16c0e9fe162bf40e7564d804f5a6"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.890588    2338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13adb992-3006-4269-bc2a-255a3908ac95" path="/var/lib/kubelet/pods/13adb992-3006-4269-bc2a-255a3908ac95/volumes"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.891155    2338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac3168f-0bcd-4153-867b-4c58e4383c15" path="/var/lib/kubelet/pods/4ac3168f-0bcd-4153-867b-4c58e4383c15/volumes"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.891542    2338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffaedc2-7aad-4454-b435-9dc17bafb9aa" path="/var/lib/kubelet/pods/9ffaedc2-7aad-4454-b435-9dc17bafb9aa/volumes"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893363    2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ac3168f-0bcd-4153-867b-4c58e4383c15" containerName="registry"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893398    2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9aa821c4-0762-41d7-918a-69e014935d35" containerName="controller"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893409    2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d807dd65-94ff-458f-90b4-26a6a55d5921" containerName="minikube-ingress-dns"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893416    2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ffaedc2-7aad-4454-b435-9dc17bafb9aa" containerName="registry-proxy"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893425    2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d254e894-5053-48de-8b53-ba82389fc06c" containerName="gadget"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893461    2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa821c4-0762-41d7-918a-69e014935d35" containerName="controller"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893471    2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ffaedc2-7aad-4454-b435-9dc17bafb9aa" containerName="registry-proxy"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893478    2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="d807dd65-94ff-458f-90b4-26a6a55d5921" containerName="minikube-ingress-dns"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893484    2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac3168f-0bcd-4153-867b-4c58e4383c15" containerName="registry"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893490    2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="d254e894-5053-48de-8b53-ba82389fc06c" containerName="gadget"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976667    2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdv4q\" (UniqueName: \"kubernetes.io/projected/2f450bd5-2701-449b-a573-739a33a2a558-kube-api-access-mdv4q\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976721    2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2f450bd5-2701-449b-a573-739a33a2a558-gcp-creds\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976755    2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2f450bd5-2701-449b-a573-739a33a2a558-script\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
	Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976780    2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2f450bd5-2701-449b-a573-739a33a2a558-data\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
	
	
	==> storage-provisioner [3ae63920702c] <==
	I0910 17:30:38.677855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:30:38.795132       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:30:38.795202       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:30:38.835889       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:30:38.836066       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-018527_0e9ffa8d-f6f7-4916-91bd-a91b67c325c1!
	I0910 17:30:38.836990       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cee871a3-ba07-4699-8ab1-d63f5152f32e", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-018527_0e9ffa8d-f6f7-4916-91bd-a91b67c325c1 became leader
	I0910 17:30:38.936524       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-018527_0e9ffa8d-f6f7-4916-91bd-a91b67c325c1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-018527 -n addons-018527
helpers_test.go:261: (dbg) Run:  kubectl --context addons-018527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-018527 describe pod busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-018527 describe pod busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d: exit status 1 (152.055734ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-018527/192.168.49.2
	Start Time:       Tue, 10 Sep 2024 17:34:00 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ldgnv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ldgnv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m19s                  default-scheduler  Successfully assigned default/busybox to addons-018527
	  Normal   Pulling    7m48s (x4 over 9m18s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m18s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m18s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m35s (x6 over 9m17s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m7s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m7428 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-m7428:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-018527 describe pod busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.17s)
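
For anyone triaging this failure by hand, the two post-mortem queries the harness ran above can be replayed directly against the cluster. A minimal shell sketch follows (it assumes the profile from this run, so the addons-018527 context name and the busybox pod name are specific to this report):

	# List pods stuck outside the Running phase, exactly as helpers_test.go does above.
	kubectl --context addons-018527 get po -A --field-selector=status.phase!=Running -o=jsonpath={.items[*].metadata.name}

	# Describe one of the reported pods to surface its events (e.g. the ImagePullBackOff above).
	kubectl --context addons-018527 describe pod busybox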


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 18.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 6.05
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.21
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 60.07
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 222.2
29 TestAddons/serial/Volcano 41.31
31 TestAddons/serial/GCPAuth/Namespaces 0.54
34 TestAddons/parallel/Ingress 20.05
35 TestAddons/parallel/InspektorGadget 11.85
36 TestAddons/parallel/MetricsServer 6.99
39 TestAddons/parallel/CSI 34.38
40 TestAddons/parallel/Headlamp 16.74
41 TestAddons/parallel/CloudSpanner 6.5
42 TestAddons/parallel/LocalPath 53.71
43 TestAddons/parallel/NvidiaDevicePlugin 5.47
44 TestAddons/parallel/Yakd 11.67
45 TestAddons/StoppedEnableDisable 11.21
46 TestCertOptions 45.31
47 TestCertExpiration 247.21
48 TestDockerFlags 49.76
49 TestForceSystemdFlag 49.88
50 TestForceSystemdEnv 39.12
56 TestErrorSpam/setup 35.48
57 TestErrorSpam/start 0.9
58 TestErrorSpam/status 1.17
59 TestErrorSpam/pause 1.43
60 TestErrorSpam/unpause 1.54
61 TestErrorSpam/stop 2.11
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 73.32
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.39
68 TestFunctional/serial/KubeContext 0.24
69 TestFunctional/serial/KubectlGetPods 0.13
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.35
73 TestFunctional/serial/CacheCmd/cache/add_local 0.99
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 38.1
82 TestFunctional/serial/ComponentHealth 0.09
83 TestFunctional/serial/LogsCmd 1.21
84 TestFunctional/serial/LogsFileCmd 1.29
85 TestFunctional/serial/InvalidService 4.91
87 TestFunctional/parallel/ConfigCmd 0.5
88 TestFunctional/parallel/DashboardCmd 10.61
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.14
95 TestFunctional/parallel/ServiceCmdConnect 11.64
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 35.73
99 TestFunctional/parallel/SSHCmd 0.7
100 TestFunctional/parallel/CpCmd 2.31
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.3
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
111 TestFunctional/parallel/License 0.29
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.47
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
124 TestFunctional/parallel/ServiceCmd/List 0.54
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
127 TestFunctional/parallel/ServiceCmd/Format 0.39
128 TestFunctional/parallel/ServiceCmd/URL 0.37
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
130 TestFunctional/parallel/ProfileCmd/profile_list 0.45
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
132 TestFunctional/parallel/MountCmd/any-port 9.24
133 TestFunctional/parallel/MountCmd/specific-port 1.44
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.89
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.13
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.36
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.16
142 TestFunctional/parallel/ImageCommands/Setup 0.74
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.44
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
145 TestFunctional/parallel/DockerEnv/bash 1.29
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 125.06
161 TestMultiControlPlane/serial/DeployApp 50.1
162 TestMultiControlPlane/serial/PingHostFromPods 1.78
163 TestMultiControlPlane/serial/AddWorkerNode 27.25
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
166 TestMultiControlPlane/serial/CopyFile 20.29
167 TestMultiControlPlane/serial/StopSecondaryNode 12.02
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
169 TestMultiControlPlane/serial/RestartSecondaryNode 73.1
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.16
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 242.93
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.26
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
174 TestMultiControlPlane/serial/StopCluster 32.97
175 TestMultiControlPlane/serial/RestartCluster 86.36
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
177 TestMultiControlPlane/serial/AddSecondaryNode 46.54
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
181 TestImageBuild/serial/Setup 30.92
182 TestImageBuild/serial/NormalBuild 1.88
183 TestImageBuild/serial/BuildWithBuildArg 1.03
184 TestImageBuild/serial/BuildWithDockerIgnore 0.77
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.92
189 TestJSONOutput/start/Command 43.12
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.81
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.53
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.9
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 33.98
215 TestKicCustomNetwork/use_default_bridge_network 36.25
216 TestKicExistingNetwork 33.4
217 TestKicCustomSubnet 35.72
218 TestKicStaticIP 34.1
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 76.72
223 TestMountStart/serial/StartWithMountFirst 7.79
224 TestMountStart/serial/VerifyMountFirst 0.25
225 TestMountStart/serial/StartWithMountSecond 10.93
226 TestMountStart/serial/VerifyMountSecond 0.27
227 TestMountStart/serial/DeleteFirst 1.47
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.33
230 TestMountStart/serial/RestartStopped 8.81
231 TestMountStart/serial/VerifyMountPostStop 0.28
234 TestMultiNode/serial/FreshStart2Nodes 86.93
235 TestMultiNode/serial/DeployApp2Nodes 36.71
236 TestMultiNode/serial/PingHostFrom2Pods 1.03
237 TestMultiNode/serial/AddNode 21.17
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.44
240 TestMultiNode/serial/CopyFile 10.26
241 TestMultiNode/serial/StopNode 2.33
242 TestMultiNode/serial/StartAfterStop 10.94
243 TestMultiNode/serial/RestartKeepsNodes 98.66
244 TestMultiNode/serial/DeleteNode 5.76
245 TestMultiNode/serial/StopMultiNode 21.62
246 TestMultiNode/serial/RestartMultiNode 57.38
247 TestMultiNode/serial/ValidateNameConflict 35.45
252 TestPreload 102.78
254 TestScheduledStopUnix 104.33
255 TestSkaffold 120.23
257 TestInsufficientStorage 11.64
258 TestRunningBinaryUpgrade 107.42
260 TestKubernetesUpgrade 225.35
261 TestMissingContainerUpgrade 170.81
263 TestPause/serial/Start 83.93
264 TestPause/serial/SecondStartNoReconfiguration 35.75
265 TestPause/serial/Pause 0.82
266 TestPause/serial/VerifyStatus 0.38
267 TestPause/serial/Unpause 0.77
268 TestPause/serial/PauseAgain 0.85
269 TestPause/serial/DeletePaused 2.33
270 TestPause/serial/VerifyDeletedResources 0.45
271 TestStoppedBinaryUpgrade/Setup 0.91
272 TestStoppedBinaryUpgrade/Upgrade 83.7
273 TestStoppedBinaryUpgrade/MinikubeLogs 2.14
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
283 TestNoKubernetes/serial/StartWithK8s 40.63
295 TestNoKubernetes/serial/StartWithStopK8s 17.25
296 TestNoKubernetes/serial/Start 11.68
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
298 TestNoKubernetes/serial/ProfileList 2.91
299 TestNoKubernetes/serial/Stop 1.28
300 TestNoKubernetes/serial/StartNoArgs 8.73
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
303 TestStartStop/group/old-k8s-version/serial/FirstStart 174.3
304 TestStartStop/group/old-k8s-version/serial/DeployApp 10.64
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
306 TestStartStop/group/old-k8s-version/serial/Stop 10.94
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 145.51
310 TestStartStop/group/no-preload/serial/FirstStart 57.11
311 TestStartStop/group/no-preload/serial/DeployApp 9.44
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
313 TestStartStop/group/no-preload/serial/Stop 11.06
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/no-preload/serial/SecondStart 267.41
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
319 TestStartStop/group/old-k8s-version/serial/Pause 2.89
321 TestStartStop/group/embed-certs/serial/FirstStart 43.49
322 TestStartStop/group/embed-certs/serial/DeployApp 9.45
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
324 TestStartStop/group/embed-certs/serial/Stop 11.03
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
326 TestStartStop/group/embed-certs/serial/SecondStart 304.28
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
330 TestStartStop/group/no-preload/serial/Pause 2.92
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.04
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.38
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.85
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.22
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.13
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
341 TestStartStop/group/embed-certs/serial/Pause 2.98
343 TestStartStop/group/newest-cni/serial/FirstStart 35.72
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.54
346 TestStartStop/group/newest-cni/serial/Stop 9.61
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
348 TestStartStop/group/newest-cni/serial/SecondStart 18.22
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
352 TestStartStop/group/newest-cni/serial/Pause 3.25
353 TestNetworkPlugins/group/auto/Start 77.64
354 TestNetworkPlugins/group/auto/KubeletFlags 0.29
355 TestNetworkPlugins/group/auto/NetCatPod 11.32
356 TestNetworkPlugins/group/auto/DNS 0.18
357 TestNetworkPlugins/group/auto/Localhost 0.18
358 TestNetworkPlugins/group/auto/HairPin 0.18
359 TestNetworkPlugins/group/kindnet/Start 68.38
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.21
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.26
364 TestNetworkPlugins/group/calico/Start 74.53
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.4
368 TestNetworkPlugins/group/kindnet/DNS 0.24
369 TestNetworkPlugins/group/kindnet/Localhost 0.21
370 TestNetworkPlugins/group/kindnet/HairPin 0.23
371 TestNetworkPlugins/group/custom-flannel/Start 58.53
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.35
374 TestNetworkPlugins/group/calico/NetCatPod 12.4
375 TestNetworkPlugins/group/calico/DNS 0.3
376 TestNetworkPlugins/group/calico/Localhost 0.28
377 TestNetworkPlugins/group/calico/HairPin 0.19
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.6
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.43
380 TestNetworkPlugins/group/custom-flannel/DNS 0.27
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
383 TestNetworkPlugins/group/false/Start 87.15
384 TestNetworkPlugins/group/enable-default-cni/Start 85
385 TestNetworkPlugins/group/false/KubeletFlags 0.53
386 TestNetworkPlugins/group/false/NetCatPod 10.28
387 TestNetworkPlugins/group/false/DNS 0.18
388 TestNetworkPlugins/group/false/Localhost 0.19
389 TestNetworkPlugins/group/false/HairPin 0.18
390 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
391 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
392 TestNetworkPlugins/group/flannel/Start 63.52
393 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
394 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
395 TestNetworkPlugins/group/enable-default-cni/HairPin 0.3
396 TestNetworkPlugins/group/bridge/Start 56.64
397 TestNetworkPlugins/group/flannel/ControllerPod 6.01
398 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
399 TestNetworkPlugins/group/flannel/NetCatPod 12.37
400 TestNetworkPlugins/group/flannel/DNS 0.19
401 TestNetworkPlugins/group/flannel/Localhost 0.19
402 TestNetworkPlugins/group/flannel/HairPin 0.17
403 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
404 TestNetworkPlugins/group/bridge/NetCatPod 11.27
405 TestNetworkPlugins/group/bridge/DNS 0.29
406 TestNetworkPlugins/group/bridge/Localhost 0.28
407 TestNetworkPlugins/group/bridge/HairPin 0.25
408 TestNetworkPlugins/group/kubenet/Start 49.68
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.28
410 TestNetworkPlugins/group/kubenet/NetCatPod 9.27
411 TestNetworkPlugins/group/kubenet/DNS 0.18
412 TestNetworkPlugins/group/kubenet/Localhost 0.19
413 TestNetworkPlugins/group/kubenet/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (18.08s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-933311 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-933311 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (18.080482383s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.08s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-933311
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-933311: exit status 85 (73.094105ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-933311 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |          |
	|         | -p download-only-933311        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:10.085155    7530 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:10.085304    7530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:10.085309    7530 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:10.085314    7530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:10.085673    7530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	W0910 17:29:10.085842    7530 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19598-2209/.minikube/config/config.json: open /home/jenkins/minikube-integration/19598-2209/.minikube/config/config.json: no such file or directory
	I0910 17:29:10.086303    7530 out.go:352] Setting JSON to true
	I0910 17:29:10.087200    7530 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":697,"bootTime":1725988653,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0910 17:29:10.087288    7530 start.go:139] virtualization:  
	I0910 17:29:10.093138    7530 out.go:97] [download-only-933311] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0910 17:29:10.093358    7530 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:29:10.093509    7530 notify.go:220] Checking for updates...
	I0910 17:29:10.096332    7530 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:10.098883    7530 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:10.100997    7530 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	I0910 17:29:10.103560    7530 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	I0910 17:29:10.105857    7530 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0910 17:29:10.110444    7530 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 17:29:10.111026    7530 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:10.137334    7530 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 17:29:10.137453    7530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:29:10.459938    7530 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 17:29:10.449826286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:29:10.460051    7530 docker.go:318] overlay module found
	I0910 17:29:10.462480    7530 out.go:97] Using the docker driver based on user configuration
	I0910 17:29:10.462512    7530 start.go:297] selected driver: docker
	I0910 17:29:10.462520    7530 start.go:901] validating driver "docker" against <nil>
	I0910 17:29:10.462627    7530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:29:10.525237    7530 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 17:29:10.516038501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:29:10.525421    7530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:10.525717    7530 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0910 17:29:10.525905    7530 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 17:29:10.528333    7530 out.go:169] Using Docker driver with root privileges
	I0910 17:29:10.530584    7530 cni.go:84] Creating CNI manager for ""
	I0910 17:29:10.530624    7530 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 17:29:10.530707    7530 start.go:340] cluster config:
	{Name:download-only-933311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-933311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:10.533192    7530 out.go:97] Starting "download-only-933311" primary control-plane node in "download-only-933311" cluster
	I0910 17:29:10.533232    7530 cache.go:121] Beginning downloading kic base image for docker with docker
	I0910 17:29:10.535109    7530 out.go:97] Pulling base image v0.0.45-1725963390-19606 ...
	I0910 17:29:10.535146    7530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:29:10.535298    7530 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local docker daemon
	I0910 17:29:10.551906    7530 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 17:29:10.552140    7530 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory
	I0910 17:29:10.552274    7530 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 17:29:10.618595    7530 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 17:29:10.618626    7530 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:10.618803    7530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:29:10.621147    7530 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0910 17:29:10.621179    7530 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 17:29:10.716473    7530 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0910 17:29:17.146204    7530 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 17:29:17.146307    7530 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 17:29:18.147120    7530 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0910 17:29:18.147488    7530 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/download-only-933311/config.json ...
	I0910 17:29:18.147524    7530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/download-only-933311/config.json: {Name:mk02b241c5dd03f12ecb1f18da5d5d767e6e6504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:18.147727    7530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:29:18.147936    7530 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19598-2209/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-933311 host does not exist
	  To start a cluster, run: "minikube start -p download-only-933311"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
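Note on the non-zero exit above: a --download-only profile never boots a node, so "minikube logs" has no host to read from and fails by design; the test asserts exactly that. A hand-run equivalent (a sketch, assuming the same workspace as this run):

    out/minikube-linux-arm64 logs -p download-only-933311
    # prints: "The control-plane node download-only-933311 host does not exist"
    # and exits with status 85, which the test treats as the expected outcome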

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-933311
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
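The two cleanup paths exercised above can be replayed directly; the second delete is expected to succeed even though "delete --all" already removed the profile, which is what DeleteAlwaysSucceeds asserts:

    out/minikube-linux-arm64 delete --all                       # remove every profile
    out/minikube-linux-arm64 delete -p download-only-933311     # still exits 0 when the profile is already gone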

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (6.05s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-643138 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-643138 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.047102402s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.05s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-643138
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-643138: exit status 85 (82.819194ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-933311 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-933311        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-933311        | download-only-933311 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -o=json --download-only        | download-only-643138 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-643138        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:28
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:28.546583    7731 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:28.546800    7731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:28.546826    7731 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:28.546849    7731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:28.547152    7731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 17:29:28.547636    7731 out.go:352] Setting JSON to true
	I0910 17:29:28.548475    7731 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":716,"bootTime":1725988653,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0910 17:29:28.548571    7731 start.go:139] virtualization:  
	I0910 17:29:28.551268    7731 out.go:97] [download-only-643138] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0910 17:29:28.551455    7731 notify.go:220] Checking for updates...
	I0910 17:29:28.553762    7731 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:28.555835    7731 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:28.557724    7731 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	I0910 17:29:28.559755    7731 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	I0910 17:29:28.561352    7731 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0910 17:29:28.564811    7731 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 17:29:28.565137    7731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:28.592553    7731 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 17:29:28.592669    7731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:29:28.653348    7731 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 17:29:28.643120341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:29:28.653463    7731 docker.go:318] overlay module found
	I0910 17:29:28.655544    7731 out.go:97] Using the docker driver based on user configuration
	I0910 17:29:28.655572    7731 start.go:297] selected driver: docker
	I0910 17:29:28.655579    7731 start.go:901] validating driver "docker" against <nil>
	I0910 17:29:28.655685    7731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:29:28.715788    7731 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 17:29:28.706851728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:29:28.715991    7731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:28.716258    7731 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0910 17:29:28.716411    7731 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 17:29:28.718784    7731 out.go:169] Using Docker driver with root privileges
	I0910 17:29:28.720700    7731 cni.go:84] Creating CNI manager for ""
	I0910 17:29:28.720733    7731 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:29:28.720753    7731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:28.720840    7731 start.go:340] cluster config:
	{Name:download-only-643138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-643138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:28.722964    7731 out.go:97] Starting "download-only-643138" primary control-plane node in "download-only-643138" cluster
	I0910 17:29:28.722989    7731 cache.go:121] Beginning downloading kic base image for docker with docker
	I0910 17:29:28.725109    7731 out.go:97] Pulling base image v0.0.45-1725963390-19606 ...
	I0910 17:29:28.725139    7731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:29:28.725314    7731 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local docker daemon
	I0910 17:29:28.743185    7731 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
	I0910 17:29:28.743345    7731 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory
	I0910 17:29:28.743366    7731 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory, skipping pull
	I0910 17:29:28.743371    7731 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 exists in cache, skipping pull
	I0910 17:29:28.743378    7731 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 as a tarball
	I0910 17:29:28.861798    7731 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 17:29:28.861840    7731 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:28.862008    7731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:29:28.864107    7731 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0910 17:29:28.864139    7731 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 17:29:28.973872    7731 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0910 17:29:33.106113    7731 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 17:29:33.106230    7731 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-643138 host does not exist
	  To start a cluster, run: "minikube start -p download-only-643138"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
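The preload download above embeds its md5 in the URL query string, and the "saving/verifying checksum" lines show minikube re-checking the tarball after download. A manual spot-check of the cached file (a sketch; path and digest taken from the log above):

    md5sum /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
    # expected digest: 90c22abece392b762c0b4e45be981bb4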

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-643138
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-558808 --alsologtostderr --binary-mirror http://127.0.0.1:38421 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-558808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-558808
--- PASS: TestBinaryMirror (0.63s)
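TestBinaryMirror redirects the kubectl/kubelet/kubeadm binary downloads to a local HTTP mirror instead of the default upstream. Outside the harness the same flow looks roughly like this (a sketch; the port is just the one this run happened to use, and a mirror must actually be serving there):

    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:38421 \
      --driver=docker --container-runtime=docker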

                                                
                                    
TestOffline (60.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-905016 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-905016 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (57.976381604s)
helpers_test.go:175: Cleaning up "offline-docker-905016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-905016
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-905016: (2.0946424s)
--- PASS: TestOffline (60.07s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-018527
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-018527: exit status 85 (62.42214ms)

                                                
                                                
-- stdout --
	* Profile "addons-018527" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-018527"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-018527
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-018527: exit status 85 (67.838628ms)

                                                
                                                
-- stdout --
	* Profile "addons-018527" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-018527"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (222.2s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-018527 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-018527 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m42.195338536s)
--- PASS: TestAddons/Setup (222.20s)
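The setup flags above enable every addon under test in a single start. A trimmed, hand-runnable version (a sketch; keep or drop --addons flags as needed, the full list is in the command above):

    out/minikube-linux-arm64 start -p addons-018527 --wait=true --memory=4000 \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns \
      --driver=docker --container-runtime=docker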

                                                
                                    
TestAddons/serial/Volcano (41.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 45.719529ms
addons_test.go:905: volcano-admission stabilized in 45.769366ms
addons_test.go:897: volcano-scheduler stabilized in 45.795311ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-czmpw" [29e19d56-8bd4-454d-973c-a116896262a2] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004402833s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-fbn98" [e9dae119-23f0-469c-a05d-539432d6c905] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003536845s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-zgk54" [37e50185-714d-4e81-9cf4-2f07e398aea0] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004057034s
addons_test.go:932: (dbg) Run:  kubectl --context addons-018527 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-018527 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-018527 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b176da90-b746-410d-b73d-6a830959d447] Pending
helpers_test.go:344: "test-job-nginx-0" [b176da90-b746-410d-b73d-6a830959d447] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b176da90-b746-410d-b73d-6a830959d447] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003444946s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 addons disable volcano --alsologtostderr -v=1: (10.608633173s)
--- PASS: TestAddons/serial/Volcano (41.31s)
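After the job from testdata/vcjob.yaml is submitted, its state can be inspected with the same kinds of queries the test issues (namespace and label taken from the log above):

    kubectl --context addons-018527 get vcjob -n my-volcano
    kubectl --context addons-018527 get pods -n my-volcano -l volcano.sh/job-name=test-job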

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.54s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-018527 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-018527 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.54s)
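The assertion above is that the gcp-auth addon replicates its secret into newly created namespaces; the two commands from the run are replayable verbatim:

    kubectl --context addons-018527 create ns new-namespace
    kubectl --context addons-018527 get secret gcp-auth -n new-namespace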

                                                
                                    
TestAddons/parallel/Ingress (20.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-018527 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-018527 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-018527 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [15b8b8a4-87f0-478d-b0c0-2d9fe6bdbc9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [15b8b8a4-87f0-478d-b0c0-2d9fe6bdbc9a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003779748s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-018527 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 addons disable ingress-dns --alsologtostderr -v=1: (1.34234021s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 addons disable ingress --alsologtostderr -v=1: (7.945979676s)
--- PASS: TestAddons/parallel/Ingress (20.05s)
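Both ingress paths checked above can be reproduced by hand, using the cluster IP the "ip" step reported:

    out/minikube-linux-arm64 -p addons-018527 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2    # resolution served by the ingress-dns addon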

                                                
                                    
TestAddons/parallel/InspektorGadget (11.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qzj2c" [d254e894-5053-48de-8b53-ba82389fc06c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005487791s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-018527
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-018527: (5.846945467s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.99s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.867703ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-m4w8v" [c99d7e90-85cb-445e-9f15-c2a13cc75a7a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003903353s
addons_test.go:417: (dbg) Run:  kubectl --context addons-018527 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)
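Once metrics-server reports healthy, pod-level resource usage becomes queryable, which is what the top command above exercises:

    kubectl --context addons-018527 top pods -n kube-system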

                                                
                                    
TestAddons/parallel/CSI (34.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.12174ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-018527 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-018527 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8bdc4224-36c5-42eb-8cb3-91a29efb129f] Pending
helpers_test.go:344: "task-pv-pod" [8bdc4224-36c5-42eb-8cb3-91a29efb129f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8bdc4224-36c5-42eb-8cb3-91a29efb129f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003727649s
addons_test.go:590: (dbg) Run:  kubectl --context addons-018527 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-018527 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-018527 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-018527 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-018527 delete pod task-pv-pod: (1.444307923s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-018527 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-018527 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-018527 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8ba56c1d-2ace-4a84-852d-7163ee72c261] Pending
helpers_test.go:344: "task-pv-pod-restore" [8ba56c1d-2ace-4a84-852d-7163ee72c261] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8ba56c1d-2ace-4a84-852d-7163ee72c261] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003753564s
addons_test.go:632: (dbg) Run:  kubectl --context addons-018527 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-018527 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-018527 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.680310736s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (34.38s)
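The repeated "get pvc ... jsonpath={.status.phase}" calls above are a poll loop waiting for the claim to bind. An equivalent shell loop (a sketch; Bound is the phase the helper waits for):

    until [ "$(kubectl --context addons-018527 get pvc hpvc -o jsonpath={.status.phase})" = "Bound" ]; do
      sleep 2
    done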

                                                
                                    
TestAddons/parallel/Headlamp (16.74s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-018527 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-cqln6" [9d9027f1-b73e-43e1-8fce-c975e221fabd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-cqln6" [9d9027f1-b73e-43e1-8fce-c975e221fabd] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004147854s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 addons disable headlamp --alsologtostderr -v=1: (5.782772298s)
--- PASS: TestAddons/parallel/Headlamp (16.74s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-5jq7w" [8aeb9639-48bd-4a96-b522-46b016572e7b] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003827961s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-018527
--- PASS: TestAddons/parallel/CloudSpanner (6.50s)

                                                
                                    
TestAddons/parallel/LocalPath (53.71s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-018527 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-018527 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018527 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7d55ae57-f821-4b3c-a8f4-0ed0b2ad07be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7d55ae57-f821-4b3c-a8f4-0ed0b2ad07be] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7d55ae57-f821-4b3c-a8f4-0ed0b2ad07be] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005493172s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-018527 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 ssh "cat /opt/local-path-provisioner/pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-018527 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-018527 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.483270775s)
--- PASS: TestAddons/parallel/LocalPath (53.71s)
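The local-path provisioner backs the PVC with a host directory, which is why the test can read the written file straight off the node (path copied from the run above; the pvc-... segment is unique to each run):

    out/minikube-linux-arm64 -p addons-018527 ssh "cat /opt/local-path-provisioner/pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d_default_test-pvc/file1"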

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nzqkz" [8e4852cb-f95d-48ef-a74c-8da89946c2d5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003917462s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-018527
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                    
TestAddons/parallel/Yakd (11.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2z6kf" [e99987b2-2a97-4e08-a7eb-7875d5b9cbb2] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00418003s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-018527 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 addons disable yakd --alsologtostderr -v=1: (5.662557289s)
--- PASS: TestAddons/parallel/Yakd (11.67s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-018527
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-018527: (10.901751655s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-018527
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-018527
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-018527
--- PASS: TestAddons/StoppedEnableDisable (11.21s)

                                                
                                    
TestCertOptions (45.31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-453296 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0910 18:28:18.798944    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-453296 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (41.725318498s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-453296 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-453296 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-453296 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-453296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-453296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-453296: (2.805796733s)
--- PASS: TestCertOptions (45.31s)
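The custom SANs and API server port are verified by dumping the generated certificate; the grep here is only added for readability (the test inspects the full text):

    out/minikube-linux-arm64 -p cert-options-453296 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"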

                                                
                                    
TestCertExpiration (247.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-438423 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-438423 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (39.52873509s)
E0910 18:29:28.083458    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:29:55.792751    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-438423 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-438423 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (25.085964679s)
helpers_test.go:175: Cleaning up "cert-expiration-438423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-438423
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-438423: (2.596392493s)
--- PASS: TestCertExpiration (247.21s)
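The two start invocations above bracket a certificate rotation: a 3m TTL, a pause long enough for expiry (the roughly four-minute wall time suggests the test waits it out), then a restart with an 8760h TTL that regenerates the certs. By hand (a sketch):

    out/minikube-linux-arm64 start -p cert-expiration-438423 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
    # wait more than 3m for the certs to expire, then:
    out/minikube-linux-arm64 start -p cert-expiration-438423 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker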

                                                
                                    
TestDockerFlags (49.76s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-820550 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0910 18:25:50.026025    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-820550 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (46.266810746s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-820550 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-820550 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-820550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-820550
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-820550: (2.631768011s)
--- PASS: TestDockerFlags (49.76s)
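The --docker-env and --docker-opt values are asserted by reading them back from the docker systemd unit, exactly as run above:

    out/minikube-linux-arm64 -p docker-flags-820550 ssh "sudo systemctl show docker --property=Environment --no-pager"   # should include FOO=BAR and BAZ=BAT
    out/minikube-linux-arm64 -p docker-flags-820550 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # should carry the values passed via --docker-opt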

                                                
                                    
TestForceSystemdFlag (49.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-319380 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-319380 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (46.774012097s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-319380 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-319380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-319380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-319380: (2.516394169s)
--- PASS: TestForceSystemdFlag (49.88s)
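
Note: the assertion behind this PASS is the daemon's cgroup driver. The same property can be read back directly (a sketch, assuming the profile were still up):

	out/minikube-linux-arm64 -p force-systemd-flag-319380 ssh "docker info --format {{.CgroupDriver}}"
	# with --force-systemd this should print "systemd" rather than the default "cgroupfs"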

TestForceSystemdEnv (39.12s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-422204 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0910 18:27:11.950525    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-422204 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.437020674s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-422204 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-422204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-422204
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-422204: (2.228911674s)
--- PASS: TestForceSystemdEnv (39.12s)
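
Note: this variant drives the same cgroup-driver check through the environment rather than a flag; MINIKUBE_FORCE_SYSTEMD (visible in the env dumps later in this report) is the knob. A sketch, assuming MINIKUBE_FORCE_SYSTEMD=true is what the test exports:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-422204 --memory=2048 --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 -p force-systemd-env-422204 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd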

TestErrorSpam/setup (35.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-910058 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-910058 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-910058 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-910058 --driver=docker  --container-runtime=docker: (35.482376537s)
--- PASS: TestErrorSpam/setup (35.48s)

TestErrorSpam/start (0.9s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (2.11s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 stop: (1.919011752s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-910058 --log_dir /tmp/nospam-910058 stop
--- PASS: TestErrorSpam/stop (2.11s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19598-2209/.minikube/files/etc/test/nested/copy/7525/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.32s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-613813 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-613813 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m13.315339224s)
--- PASS: TestFunctional/serial/StartWithProxy (73.32s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.39s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-613813 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-613813 --alsologtostderr -v=8: (37.385722622s)
functional_test.go:663: soft start took 37.391452424s for "functional-613813" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.39s)

TestFunctional/serial/KubeContext (0.24s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.24s)

TestFunctional/serial/KubectlGetPods (0.13s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-613813 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 cache add registry.k8s.io/pause:3.1: (1.217577046s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 cache add registry.k8s.io/pause:3.3: (1.068890995s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 cache add registry.k8s.io/pause:latest: (1.059461155s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-613813 /tmp/TestFunctionalserialCacheCmdcacheadd_local1502318116/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cache add minikube-local-cache-test:functional-613813
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cache delete minikube-local-cache-test:functional-613813
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-613813
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (325.624731ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
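
Note: the Non-zero exit above is deliberate, not a flake: the test removes the image inside the node so that "cache reload" can be shown to restore it. The exercised sequence, as a standalone sketch:

	out/minikube-linux-arm64 -p functional-613813 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-613813 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	out/minikube-linux-arm64 -p functional-613813 cache reload                                            # pushes cached images back into the node
	out/minikube-linux-arm64 -p functional-613813 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again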

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 kubectl -- --context functional-613813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-613813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (38.1s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-613813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-613813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.100610645s)
functional_test.go:761: restart took 38.100740753s for "functional-613813" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.10s)
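
Note: --extra-config takes the form <component>.<flag>=<value> and forwards the flag to the named control-plane component on restart; here it lands on the kube-apiserver admission-plugin list. The invocation, as a sketch:

	out/minikube-linux-arm64 start -p functional-613813 \
		--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all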

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-613813 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
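
Note: the phase/status pairs above come from parsing the control-plane pods' JSON. A rough equivalent for a quick manual look (a sketch; the test itself consumes the full -o=json output):

	kubectl --context functional-613813 get po -l tier=control-plane -n kube-system \
		-o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'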

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 logs: (1.213962228s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 logs --file /tmp/TestFunctionalserialLogsFileCmd3182272955/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 logs --file /tmp/TestFunctionalserialLogsFileCmd3182272955/001/logs.txt: (1.283754938s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.91s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-613813 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-613813
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-613813: exit status 115 (584.171643ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30881 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-613813 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-613813 delete -f testdata/invalidsvc.yaml: (1.043741055s)
--- PASS: TestFunctional/serial/InvalidService (4.91s)
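
Note: exit status 115 (SVC_UNREACHABLE) is the expected result here: the service exists and receives a NodePort, but no running pod backs it, so minikube refuses to hand out the URL. The flow, as a sketch:

	kubectl --context functional-613813 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-613813    # exit 115: no running pod for the service
	kubectl --context functional-613813 delete -f testdata/invalidsvc.yaml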

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 config get cpus: exit status 14 (93.164424ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 config get cpus: exit status 14 (95.921235ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
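
Note: both exit status 14 results are expected; "config get" signals a missing key with a non-zero exit rather than empty output. The round trip being verified, as a sketch:

	out/minikube-linux-arm64 -p functional-613813 config get cpus     # exit 14: key not set
	out/minikube-linux-arm64 -p functional-613813 config set cpus 2
	out/minikube-linux-arm64 -p functional-613813 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-613813 config unset cpus
	out/minikube-linux-arm64 -p functional-613813 config get cpus     # exit 14 again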

TestFunctional/parallel/DashboardCmd (10.61s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-613813 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-613813 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49148: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.61s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-613813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-613813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (178.624771ms)
-- stdout --
	* [functional-613813] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0910 17:48:32.550266   48835 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:48:32.550472   48835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:48:32.550481   48835 out.go:358] Setting ErrFile to fd 2...
	I0910 17:48:32.550487   48835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:48:32.551173   48835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 17:48:32.551694   48835 out.go:352] Setting JSON to false
	I0910 17:48:32.552766   48835 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1860,"bootTime":1725988653,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0910 17:48:32.552885   48835 start.go:139] virtualization:  
	I0910 17:48:32.555308   48835 out.go:177] * [functional-613813] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0910 17:48:32.557628   48835 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:48:32.557825   48835 notify.go:220] Checking for updates...
	I0910 17:48:32.561428   48835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:48:32.563501   48835 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	I0910 17:48:32.565456   48835 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	I0910 17:48:32.567112   48835 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0910 17:48:32.568960   48835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:48:32.571537   48835 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:48:32.572110   48835 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:48:32.602693   48835 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 17:48:32.602870   48835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:48:32.668891   48835 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 17:48:32.659001197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:48:32.669004   48835 docker.go:318] overlay module found
	I0910 17:48:32.671102   48835 out.go:177] * Using the docker driver based on existing profile
	I0910 17:48:32.672873   48835 start.go:297] selected driver: docker
	I0910 17:48:32.672890   48835 start.go:901] validating driver "docker" against &{Name:functional-613813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-613813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:48:32.672990   48835 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:48:32.675329   48835 out.go:201] 
	W0910 17:48:32.677117   48835 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0910 17:48:32.678981   48835 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-613813 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.43s)
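
Note: exit status 23 is the --dry-run validation working as intended: 250MB is below minikube's usable memory floor of 1800MB, while the second --dry-run (which inherits the profile's existing 4000MB) passes. The failing probe, as a sketch:

	out/minikube-linux-arm64 start -p functional-613813 --dry-run --memory 250MB --driver=docker --container-runtime=docker
	# exit 23: RSRC_INSUFFICIENT_REQ_MEMORY (requested 250MiB < usable minimum of 1800MB)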

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-613813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-613813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (180.561472ms)
-- stdout --
	* [functional-613813] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0910 17:48:32.373840   48789 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:48:32.374030   48789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:48:32.374042   48789 out.go:358] Setting ErrFile to fd 2...
	I0910 17:48:32.374047   48789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:48:32.374427   48789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 17:48:32.374804   48789 out.go:352] Setting JSON to false
	I0910 17:48:32.375879   48789 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1860,"bootTime":1725988653,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0910 17:48:32.375979   48789 start.go:139] virtualization:  
	I0910 17:48:32.379369   48789 out.go:177] * [functional-613813] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0910 17:48:32.382147   48789 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:48:32.382382   48789 notify.go:220] Checking for updates...
	I0910 17:48:32.385862   48789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:48:32.388026   48789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	I0910 17:48:32.390005   48789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	I0910 17:48:32.391943   48789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0910 17:48:32.393749   48789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:48:32.396289   48789 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:48:32.397756   48789 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:48:32.426005   48789 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0910 17:48:32.426120   48789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:48:32.489655   48789 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-10 17:48:32.477232129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:48:32.489771   48789 docker.go:318] overlay module found
	I0910 17:48:32.491931   48789 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0910 17:48:32.493762   48789 start.go:297] selected driver: docker
	I0910 17:48:32.493780   48789 start.go:901] validating driver "docker" against &{Name:functional-613813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-613813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:48:32.493940   48789 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:48:32.496435   48789 out.go:201] 
	W0910 17:48:32.498494   48789 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0910 17:48:32.500345   48789 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
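
Note: this is the same dry-run probe as above, run under a French locale so the RSRC_INSUFFICIENT_REQ_MEMORY message comes out localized. A sketch, assuming a locale variable such as LC_ALL=fr is how the test selects the language:

	LC_ALL=fr out/minikube-linux-arm64 start -p functional-613813 --dry-run --memory 250MB --driver=docker --container-runtime=docker
	# expect the French output seen above: "Utilisation du pilote docker basé sur le profil existant"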

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

TestFunctional/parallel/ServiceCmdConnect (11.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-613813 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-613813 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-h8kq5" [033f2017-af12-480f-b972-ec0031eae907] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-h8kq5" [033f2017-af12-480f-b972-ec0031eae907] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.0055558s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31730
functional_test.go:1675: http://192.168.49.2:31730: success! body:

Hostname: hello-node-connect-65d86f57f4-h8kq5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31730
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.64s)
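
Note: the echoserver body above is the proof that the NodePort URL returned by "minikube service" actually routes to the pod. End to end, as a sketch (the NodePort, 31730 here, is whatever the cluster assigns):

	kubectl --context functional-613813 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-613813 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-613813 service hello-node-connect --url   # e.g. http://192.168.49.2:31730
	curl -s http://192.168.49.2:31730/                                               # echoserver reflects the request back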

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (35.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ea69cad8-553e-4023-9821-f7fa1a05890e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.044301407s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-613813 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-613813 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-613813 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-613813 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-613813 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-613813 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-613813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [93e0b128-ea01-493b-9ca2-b88fe1ecff9d] Pending
helpers_test.go:344: "sp-pod" [93e0b128-ea01-493b-9ca2-b88fe1ecff9d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [93e0b128-ea01-493b-9ca2-b88fe1ecff9d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003816511s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-613813 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-613813 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-613813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9bb9be74-b69d-4242-8b43-0376bba8e8ce] Pending
E0910 17:48:23.932013    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [9bb9be74-b69d-4242-8b43-0376bba8e8ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9bb9be74-b69d-4242-8b43-0376bba8e8ce] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006109681s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-613813 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.73s)
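
Note: the touch/delete/apply/ls sequence is what demonstrates persistence: data written to the PVC-backed mount survives deleting and recreating the pod. As a sketch:

	kubectl --context functional-613813 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-613813 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-613813 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-613813 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-613813 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-613813 exec sp-pod -- ls /tmp/mount   # foo survives the pod recreation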

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh -n functional-613813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cp functional-613813:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd29282734/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh -n functional-613813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh -n functional-613813 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)
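
Note: "minikube cp" is exercised in both directions plus a destination whose parent directories do not yet exist on the node. As a sketch (the /tmp/out.txt host path is a placeholder):

	out/minikube-linux-arm64 -p functional-613813 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
	out/minikube-linux-arm64 -p functional-613813 cp functional-613813:/home/docker/cp-test.txt /tmp/out.txt  # node -> host
	out/minikube-linux-arm64 -p functional-613813 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # parent dirs created on the node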

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7525/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo cat /etc/test/nested/copy/7525/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7525.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo cat /etc/ssl/certs/7525.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7525.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo cat /usr/share/ca-certificates/7525.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75252.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo cat /etc/ssl/certs/75252.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75252.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo cat /usr/share/ca-certificates/75252.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.30s)
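
Note: each synced certificate is checked in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named alias such as 51391683.0. That alias appears to be the OpenSSL subject hash, which can be reproduced from the cert itself (a sketch; the .pem path is a placeholder for the synced file):

	openssl x509 -in 7525.pem -noout -subject_hash   # should print the hash used for the /etc/ssl/certs/<hash>.0 alias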
TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-613813 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo systemctl is-active crio"
2024/09/10 17:48:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 ssh "sudo systemctl is-active crio": exit status 1 (289.757888ms)
-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
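Note: the non-zero exit above is the passing outcome here: systemctl is-active exits 0 only when a unit is active, so "inactive" with exit status 3 confirms crio is not running on this docker-runtime cluster. A sketch of the same assertion, reusing the command from the log:

package main

// Sketch: assert that crio is NOT active when docker is the container runtime.
import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-613813",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	// A non-zero exit together with "inactive" is what a passing run looks like.
	if err != nil && state == "inactive" {
		fmt.Println("ok: crio disabled")
		return
	}
	fmt.Printf("unexpected: state=%q err=%v\n", state, err)
}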
TestFunctional/parallel/License (0.29s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-613813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-613813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-613813 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46114: os: process already finished
helpers_test.go:508: unable to kill pid 45949: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-613813 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-613813 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.47s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-613813 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [657e9615-5024-44ad-ae5b-b01e2a3659a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [657e9615-5024-44ad-ae5b-b01e2a3659a2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.00518765s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.47s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-613813 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
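Note: while minikube tunnel runs, a LoadBalancer Service receives a routable ingress IP, and the test reads it with the jsonpath shown above. A polling sketch of the same lookup, assuming kubectl on PATH and the context from this log:

package main

// Sketch: poll for the ingress IP that the tunnel assigns to nginx-svc.
import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		out, _ := exec.Command("kubectl", "--context", "functional-613813",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if len(out) > 0 {
			fmt.Printf("ingress IP: %s\n", out)
			return
		}
		// The IP only appears while a `minikube tunnel` process is running.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is the tunnel up?")
}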
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.109.133 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-613813 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-613813 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-613813 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-tsx64" [76066eaa-13d0-4fe6-ba41-848ace1e896f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0910 17:48:18.800054    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:18.807174    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:18.818559    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:18.839931    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:18.881499    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:18.962935    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:19.124492    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:19.446174    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64b4f8f9ff-tsx64" [76066eaa-13d0-4fe6-ba41-848ace1e896f] Running
E0910 17:48:20.088361    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:21.370231    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003311855s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)
TestFunctional/parallel/ServiceCmd/List (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 service list -o json
functional_test.go:1494: Took "498.28437ms" to run "out/minikube-linux-arm64 -p functional-613813 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30250
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)
TestFunctional/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)
TestFunctional/parallel/ServiceCmd/URL (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30250
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
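Note: service <name> --url resolves the NodePort endpoint (http://192.168.49.2:30250 above) without opening a browser. A sketch that resolves the URL and probes it once, assuming the hello-node service created earlier in this run exposes a single port:

package main

// Sketch: resolve the NodePort URL for hello-node and issue one GET.
import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-613813",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}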
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "382.59259ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "68.487564ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
E0910 17:48:29.054029    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "365.286531ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "57.520996ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
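Note: profile list -o json is the machine-readable variant the timing checks above exercise. The exact output schema isn't shown in this log, so this sketch decodes it generically rather than into a typed struct:

package main

// Sketch: inspect the JSON emitted by `minikube profile list -o json`.
import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var v map[string]any // schema not shown in this log; decode loosely
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	for k := range v {
		fmt.Println("top-level key:", k)
	}
}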
TestFunctional/parallel/MountCmd/any-port (9.24s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdany-port1519427565/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725990509271433240" to /tmp/TestFunctionalparallelMountCmdany-port1519427565/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725990509271433240" to /tmp/TestFunctionalparallelMountCmdany-port1519427565/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725990509271433240" to /tmp/TestFunctionalparallelMountCmdany-port1519427565/001/test-1725990509271433240
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.445917ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 10 17:48 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 10 17:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 10 17:48 test-1725990509271433240
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh cat /mount-9p/test-1725990509271433240
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-613813 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0bb92527-9ea2-4523-b3dc-4ac19c308cdf] Pending
helpers_test.go:344: "busybox-mount" [0bb92527-9ea2-4523-b3dc-4ac19c308cdf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0bb92527-9ea2-4523-b3dc-4ac19c308cdf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0bb92527-9ea2-4523-b3dc-4ac19c308cdf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004120096s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-613813 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdany-port1519427565/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.24s)
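Note: the first findmnt attempt above exits non-zero simply because the 9p mount wasn't up yet, and the helper retries. A sketch of the same visibility check, assuming a `minikube mount <hostdir>:/mount-9p` process is already running against this profile:

package main

// Sketch: confirm the 9p filesystem is visible inside the node.
import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-613813",
		"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
	if err != nil {
		// Usually this just means the mount isn't ready yet; the harness retries.
		fmt.Printf("mount not visible: %v\n", err)
		return
	}
	fmt.Printf("%s", out)
}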
TestFunctional/parallel/MountCmd/specific-port (1.44s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdspecific-port2645561338/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh -- ls -la /mount-9p
E0910 17:48:39.296319    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdspecific-port2645561338/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 ssh "sudo umount -f /mount-9p": exit status 1 (323.175013ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-613813 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdspecific-port2645561338/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.44s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.89s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4251612210/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4251612210/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4251612210/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T" /mount1: exit status 1 (930.350446ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-613813 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4251612210/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4251612210/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-613813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4251612210/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.89s)
TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)
TestFunctional/parallel/Version/components (1.13s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 version -o=json --components: (1.126331258s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-613813 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-613813
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-613813
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-613813 image ls --format short --alsologtostderr:
I0910 17:48:50.132036   51935 out.go:345] Setting OutFile to fd 1 ...
I0910 17:48:50.132223   51935 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.132230   51935 out.go:358] Setting ErrFile to fd 2...
I0910 17:48:50.132235   51935 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.132510   51935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
I0910 17:48:50.133252   51935 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.133376   51935 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.133854   51935 cli_runner.go:164] Run: docker container inspect functional-613813 --format={{.State.Status}}
I0910 17:48:50.156112   51935 ssh_runner.go:195] Run: systemctl --version
I0910 17:48:50.156183   51935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-613813
I0910 17:48:50.182609   51935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/functional-613813/id_rsa Username:docker}
I0910 17:48:50.271049   51935 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-613813 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/library/minikube-local-cache-test | functional-613813 | 15818eea1eaae | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| docker.io/kicbase/echo-server               | functional-613813 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-613813 image ls --format table --alsologtostderr:
I0910 17:48:50.421374   52008 out.go:345] Setting OutFile to fd 1 ...
I0910 17:48:50.421899   52008 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.421941   52008 out.go:358] Setting ErrFile to fd 2...
I0910 17:48:50.421961   52008 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.422256   52008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
I0910 17:48:50.423295   52008 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.423548   52008 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.424113   52008 cli_runner.go:164] Run: docker container inspect functional-613813 --format={{.State.Status}}
I0910 17:48:50.449489   52008 ssh_runner.go:195] Run: systemctl --version
I0910 17:48:50.449543   52008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-613813
I0910 17:48:50.470891   52008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/functional-613813/id_rsa Username:docker}
I0910 17:48:50.569731   52008 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-613813 image ls --format json --alsologtostderr:
[{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-613813"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"15818eea1eaae0f8e37a62bda9d96e2b2bd57754741093dd4601fff210c980a8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-613813"],"size":"30"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-613813 image ls --format json --alsologtostderr:
I0910 17:48:50.372278   51995 out.go:345] Setting OutFile to fd 1 ...
I0910 17:48:50.372408   51995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.372420   51995 out.go:358] Setting ErrFile to fd 2...
I0910 17:48:50.372427   51995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.372752   51995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
I0910 17:48:50.373539   51995 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.373711   51995 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.374286   51995 cli_runner.go:164] Run: docker container inspect functional-613813 --format={{.State.Status}}
I0910 17:48:50.407495   51995 ssh_runner.go:195] Run: systemctl --version
I0910 17:48:50.407549   51995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-613813
I0910 17:48:50.432370   51995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/functional-613813/id_rsa Username:docker}
I0910 17:48:50.523296   51995 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-613813 image ls --format yaml --alsologtostderr:
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 15818eea1eaae0f8e37a62bda9d96e2b2bd57754741093dd4601fff210c980a8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-613813
size: "30"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-613813
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-613813 image ls --format yaml --alsologtostderr:
I0910 17:48:50.141119   51936 out.go:345] Setting OutFile to fd 1 ...
I0910 17:48:50.141374   51936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.141404   51936 out.go:358] Setting ErrFile to fd 2...
I0910 17:48:50.141426   51936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.141745   51936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
I0910 17:48:50.142478   51936 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.142695   51936 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.143324   51936 cli_runner.go:164] Run: docker container inspect functional-613813 --format={{.State.Status}}
I0910 17:48:50.166086   51936 ssh_runner.go:195] Run: systemctl --version
I0910 17:48:50.166140   51936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-613813
I0910 17:48:50.195659   51936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/functional-613813/id_rsa Username:docker}
I0910 17:48:50.300699   51936 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-613813 ssh pgrep buildkitd: exit status 1 (288.63327ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image build -t localhost/my-image:functional-613813 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 image build -t localhost/my-image:functional-613813 testdata/build --alsologtostderr: (2.656585482s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-613813 image build -t localhost/my-image:functional-613813 testdata/build --alsologtostderr:
I0910 17:48:50.887005   52126 out.go:345] Setting OutFile to fd 1 ...
I0910 17:48:50.887222   52126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.887236   52126 out.go:358] Setting ErrFile to fd 2...
I0910 17:48:50.887242   52126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:48:50.887526   52126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
I0910 17:48:50.888188   52126 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.888841   52126 config.go:182] Loaded profile config "functional-613813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:48:50.889361   52126 cli_runner.go:164] Run: docker container inspect functional-613813 --format={{.State.Status}}
I0910 17:48:50.906778   52126 ssh_runner.go:195] Run: systemctl --version
I0910 17:48:50.906839   52126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-613813
I0910 17:48:50.923488   52126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/functional-613813/id_rsa Username:docker}
I0910 17:48:51.022961   52126 build_images.go:161] Building image from path: /tmp/build.1534069952.tar
I0910 17:48:51.023038   52126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0910 17:48:51.050080   52126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1534069952.tar
I0910 17:48:51.053840   52126 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1534069952.tar: stat -c "%s %y" /var/lib/minikube/build/build.1534069952.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1534069952.tar': No such file or directory
I0910 17:48:51.053871   52126 ssh_runner.go:362] scp /tmp/build.1534069952.tar --> /var/lib/minikube/build/build.1534069952.tar (3072 bytes)
I0910 17:48:51.082507   52126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1534069952
I0910 17:48:51.092958   52126 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1534069952 -xf /var/lib/minikube/build/build.1534069952.tar
I0910 17:48:51.104316   52126 docker.go:360] Building image: /var/lib/minikube/build/build.1534069952
I0910 17:48:51.104402   52126 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-613813 /var/lib/minikube/build/build.1534069952
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c57bf262383e9f7d5593ef4a1dc1606e111e10ce7c81183deb496f10fe7aa74a done
#8 naming to localhost/my-image:functional-613813 done
#8 DONE 0.1s
I0910 17:48:53.470252   52126 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-613813 /var/lib/minikube/build/build.1534069952: (2.365821573s)
I0910 17:48:53.470320   52126 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1534069952
I0910 17:48:53.480012   52126 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1534069952.tar
I0910 17:48:53.489773   52126 build_images.go:217] Built localhost/my-image:functional-613813 from /tmp/build.1534069952.tar
I0910 17:48:53.489807   52126 build_images.go:133] succeeded building to: functional-613813
I0910 17:48:53.489817   52126 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
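Note: as the build_images.go lines above show, image build tars the context, copies it to the node, and runs docker build there, so the result is immediately visible to the cluster's runtime. A sketch of the invocation, with the tag and context directory taken from the log:

package main

// Sketch: build an image inside the node's docker daemon via minikube.
import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-613813",
		"image", "build", "-t", "localhost/my-image:functional-613813",
		"testdata/build").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}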
TestFunctional/parallel/ImageCommands/Setup (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-613813
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image load --daemon kicbase/echo-server:functional-613813 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-613813 image load --daemon kicbase/echo-server:functional-613813 --alsologtostderr: (1.00483739s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image load --daemon kicbase/echo-server:functional-613813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)
TestFunctional/parallel/DockerEnv/bash (1.29s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-613813 docker-env) && out/minikube-linux-arm64 status -p functional-613813"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-613813 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.29s)
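Note: docker-env prints shell export statements that point the host docker CLI at the daemon inside the node, which the test evals in bash. A rough Go equivalent of that eval; the `export KEY="VALUE"` line format is an assumption here, so treat this as a sketch:

package main

// Sketch: run `docker images` against the daemon inside the minikube node.
import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-613813",
		"docker-env").Output()
	if err != nil {
		panic(err)
	}
	env := os.Environ()
	for _, line := range strings.Split(string(out), "\n") {
		// Assumed format: export KEY="VALUE"; strip the prefix and quotes.
		if rest, ok := strings.CutPrefix(line, "export "); ok {
			env = append(env, strings.ReplaceAll(rest, `"`, ""))
		}
	}
	docker := exec.Command("docker", "images")
	docker.Env = env
	docker.Stdout = os.Stdout
	docker.Stderr = os.Stderr
	if err := docker.Run(); err != nil {
		panic(err)
	}
}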
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-613813
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image load --daemon kicbase/echo-server:functional-613813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
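Note: update-context rewrites the profile's server address in kubeconfig (useful after a node IP or port change); the three subtests above cover the no-op case, a missing cluster entry, and no clusters at all. A minimal sketch, assuming a profile named demo:

	minikube -p demo update-context
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'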

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image save kicbase/echo-server:functional-613813 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image rm kicbase/echo-server:functional-613813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)
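Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a tar round trip. A minimal sketch, assuming a profile named demo and an illustrative tar path:

	minikube -p demo image save kicbase/echo-server:demo /tmp/echo-server.tar
	minikube -p demo image rm kicbase/echo-server:demo
	minikube -p demo image load /tmp/echo-server.tar
	minikube -p demo image ls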

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-613813
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-613813 image save --daemon kicbase/echo-server:functional-613813 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-613813
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-613813
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-613813
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-613813
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (125.06s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-720209 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0910 17:48:59.778166    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:49:40.739868    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-720209 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m4.105628656s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (125.06s)
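Note: the --ha flag provisions multiple control-plane nodes (three in this run) behind a single virtual API endpoint. A minimal sketch of the invocation under test, with an illustrative profile name:

	minikube start -p demo-ha --ha --wait=true --memory=2200 --driver=docker --container-runtime=docker
	minikube -p demo-ha status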

TestMultiControlPlane/serial/DeployApp (50.1s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- rollout status deployment/busybox
E0910 17:51:02.661495    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-720209 -- rollout status deployment/busybox: (4.698552139s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-hsqtz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-l24d8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-t4c5d -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-hsqtz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-l24d8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-t4c5d -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-hsqtz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-l24d8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-t4c5d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (50.10s)
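Note: the repeated "expected 3 Pod IPs but got 2 (may be temporary)" lines above are the test's poll loop waiting for the third busybox replica to be scheduled and assigned an IP; they clear once the rollout settles and are not failures. The underlying query, in plain kubectl form (minikube names the kubeconfig context after the profile):

	kubectl --context ha-720209 get pods -o jsonpath='{.items[*].status.podIP}'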

TestMultiControlPlane/serial/PingHostFromPods (1.78s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-hsqtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-hsqtz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-l24d8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-l24d8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-t4c5d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-720209 -- exec busybox-7dff88458-t4c5d -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.78s)

TestMultiControlPlane/serial/AddWorkerNode (27.25s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-720209 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-720209 -v=7 --alsologtostderr: (26.190100829s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr: (1.056758432s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.25s)

TestMultiControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-720209 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

TestMultiControlPlane/serial/CopyFile (20.29s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 status --output json -v=7 --alsologtostderr: (1.026964313s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp testdata/cp-test.txt ha-720209:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3358232410/001/cp-test_ha-720209.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209:/home/docker/cp-test.txt ha-720209-m02:/home/docker/cp-test_ha-720209_ha-720209-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test_ha-720209_ha-720209-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209:/home/docker/cp-test.txt ha-720209-m03:/home/docker/cp-test_ha-720209_ha-720209-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test_ha-720209_ha-720209-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209:/home/docker/cp-test.txt ha-720209-m04:/home/docker/cp-test_ha-720209_ha-720209-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test_ha-720209_ha-720209-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp testdata/cp-test.txt ha-720209-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3358232410/001/cp-test_ha-720209-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m02:/home/docker/cp-test.txt ha-720209:/home/docker/cp-test_ha-720209-m02_ha-720209.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test_ha-720209-m02_ha-720209.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m02:/home/docker/cp-test.txt ha-720209-m03:/home/docker/cp-test_ha-720209-m02_ha-720209-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test_ha-720209-m02_ha-720209-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m02:/home/docker/cp-test.txt ha-720209-m04:/home/docker/cp-test_ha-720209-m02_ha-720209-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test_ha-720209-m02_ha-720209-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp testdata/cp-test.txt ha-720209-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3358232410/001/cp-test_ha-720209-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m03:/home/docker/cp-test.txt ha-720209:/home/docker/cp-test_ha-720209-m03_ha-720209.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test_ha-720209-m03_ha-720209.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m03:/home/docker/cp-test.txt ha-720209-m02:/home/docker/cp-test_ha-720209-m03_ha-720209-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test_ha-720209-m03_ha-720209-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m03:/home/docker/cp-test.txt ha-720209-m04:/home/docker/cp-test_ha-720209-m03_ha-720209-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test_ha-720209-m03_ha-720209-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp testdata/cp-test.txt ha-720209-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3358232410/001/cp-test_ha-720209-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m04:/home/docker/cp-test.txt ha-720209:/home/docker/cp-test_ha-720209-m04_ha-720209.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209 "sudo cat /home/docker/cp-test_ha-720209-m04_ha-720209.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m04:/home/docker/cp-test.txt ha-720209-m02:/home/docker/cp-test_ha-720209-m04_ha-720209-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test_ha-720209-m04_ha-720209-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 cp ha-720209-m04:/home/docker/cp-test.txt ha-720209-m03:/home/docker/cp-test_ha-720209-m04_ha-720209-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 ssh -n ha-720209-m03 "sudo cat /home/docker/cp-test_ha-720209-m04_ha-720209-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.29s)
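Note: CopyFile fans one testdata file out to every node, then reads each copy back over SSH to verify it. A minimal sketch of the two primitives, using the profile and node names from this run:

	minikube -p ha-720209 cp testdata/cp-test.txt ha-720209-m02:/home/docker/cp-test.txt
	minikube -p ha-720209 ssh -n ha-720209-m02 "sudo cat /home/docker/cp-test.txt"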

TestMultiControlPlane/serial/StopSecondaryNode (12.02s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 node stop m02 -v=7 --alsologtostderr: (11.252335747s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr: exit status 7 (763.548667ms)

-- stdout --
	ha-720209
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-720209-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-720209-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-720209-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0910 17:52:53.205044   74859 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:52:53.205255   74859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:52:53.205282   74859 out.go:358] Setting ErrFile to fd 2...
	I0910 17:52:53.205300   74859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:52:53.205719   74859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 17:52:53.205972   74859 out.go:352] Setting JSON to false
	I0910 17:52:53.206039   74859 mustload.go:65] Loading cluster: ha-720209
	I0910 17:52:53.206073   74859 notify.go:220] Checking for updates...
	I0910 17:52:53.206989   74859 config.go:182] Loaded profile config "ha-720209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:52:53.207049   74859 status.go:255] checking status of ha-720209 ...
	I0910 17:52:53.207602   74859 cli_runner.go:164] Run: docker container inspect ha-720209 --format={{.State.Status}}
	I0910 17:52:53.231705   74859 status.go:330] ha-720209 host status = "Running" (err=<nil>)
	I0910 17:52:53.231729   74859 host.go:66] Checking if "ha-720209" exists ...
	I0910 17:52:53.232021   74859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-720209
	I0910 17:52:53.268438   74859 host.go:66] Checking if "ha-720209" exists ...
	I0910 17:52:53.268830   74859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:52:53.268883   74859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-720209
	I0910 17:52:53.290040   74859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/ha-720209/id_rsa Username:docker}
	I0910 17:52:53.384064   74859 ssh_runner.go:195] Run: systemctl --version
	I0910 17:52:53.388542   74859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:52:53.403009   74859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 17:52:53.466008   74859 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-10 17:52:53.454924913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 17:52:53.466751   74859 kubeconfig.go:125] found "ha-720209" server: "https://192.168.49.254:8443"
	I0910 17:52:53.466782   74859 api_server.go:166] Checking apiserver status ...
	I0910 17:52:53.466829   74859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:52:53.479699   74859 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2364/cgroup
	I0910 17:52:53.491325   74859 api_server.go:182] apiserver freezer: "11:freezer:/docker/b9bd78f0c977d22f352746d0abc3464922bac46dd2e2042b8be32015f74f8615/kubepods/burstable/pod20e13a357bd4ba12ee5a52be68fe2ddf/47a45dd4b85fa42b7d3ab96356ea17379b8bfb2699d0de10b1bce7240e3e1137"
	I0910 17:52:53.491396   74859 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9bd78f0c977d22f352746d0abc3464922bac46dd2e2042b8be32015f74f8615/kubepods/burstable/pod20e13a357bd4ba12ee5a52be68fe2ddf/47a45dd4b85fa42b7d3ab96356ea17379b8bfb2699d0de10b1bce7240e3e1137/freezer.state
	I0910 17:52:53.500816   74859 api_server.go:204] freezer state: "THAWED"
	I0910 17:52:53.500858   74859 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0910 17:52:53.508903   74859 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0910 17:52:53.508977   74859 status.go:422] ha-720209 apiserver status = Running (err=<nil>)
	I0910 17:52:53.509025   74859 status.go:257] ha-720209 status: &{Name:ha-720209 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:52:53.509069   74859 status.go:255] checking status of ha-720209-m02 ...
	I0910 17:52:53.509476   74859 cli_runner.go:164] Run: docker container inspect ha-720209-m02 --format={{.State.Status}}
	I0910 17:52:53.527918   74859 status.go:330] ha-720209-m02 host status = "Stopped" (err=<nil>)
	I0910 17:52:53.527941   74859 status.go:343] host is not running, skipping remaining checks
	I0910 17:52:53.527949   74859 status.go:257] ha-720209-m02 status: &{Name:ha-720209-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:52:53.527983   74859 status.go:255] checking status of ha-720209-m03 ...
	I0910 17:52:53.528285   74859 cli_runner.go:164] Run: docker container inspect ha-720209-m03 --format={{.State.Status}}
	I0910 17:52:53.557952   74859 status.go:330] ha-720209-m03 host status = "Running" (err=<nil>)
	I0910 17:52:53.557976   74859 host.go:66] Checking if "ha-720209-m03" exists ...
	I0910 17:52:53.558289   74859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-720209-m03
	I0910 17:52:53.576245   74859 host.go:66] Checking if "ha-720209-m03" exists ...
	I0910 17:52:53.576690   74859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:52:53.576774   74859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-720209-m03
	I0910 17:52:53.599183   74859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/ha-720209-m03/id_rsa Username:docker}
	I0910 17:52:53.696076   74859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:52:53.709068   74859 kubeconfig.go:125] found "ha-720209" server: "https://192.168.49.254:8443"
	I0910 17:52:53.709145   74859 api_server.go:166] Checking apiserver status ...
	I0910 17:52:53.709228   74859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:52:53.721660   74859 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2254/cgroup
	I0910 17:52:53.731512   74859 api_server.go:182] apiserver freezer: "11:freezer:/docker/2363ce3e342db49f09ce468c3c9cdaed8af1eaefaa302d846d2a758e447745a8/kubepods/burstable/pod3d399146c8302d1bff453fc925d3ea55/0fc34061c59987ca9aa6a0fc354ab4dbb5e6b1d52b560232344866d427b43b1b"
	I0910 17:52:53.731597   74859 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2363ce3e342db49f09ce468c3c9cdaed8af1eaefaa302d846d2a758e447745a8/kubepods/burstable/pod3d399146c8302d1bff453fc925d3ea55/0fc34061c59987ca9aa6a0fc354ab4dbb5e6b1d52b560232344866d427b43b1b/freezer.state
	I0910 17:52:53.740887   74859 api_server.go:204] freezer state: "THAWED"
	I0910 17:52:53.740917   74859 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0910 17:52:53.750238   74859 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0910 17:52:53.750290   74859 status.go:422] ha-720209-m03 apiserver status = Running (err=<nil>)
	I0910 17:52:53.750302   74859 status.go:257] ha-720209-m03 status: &{Name:ha-720209-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:52:53.750323   74859 status.go:255] checking status of ha-720209-m04 ...
	I0910 17:52:53.750748   74859 cli_runner.go:164] Run: docker container inspect ha-720209-m04 --format={{.State.Status}}
	I0910 17:52:53.770979   74859 status.go:330] ha-720209-m04 host status = "Running" (err=<nil>)
	I0910 17:52:53.771014   74859 host.go:66] Checking if "ha-720209-m04" exists ...
	I0910 17:52:53.771317   74859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-720209-m04
	I0910 17:52:53.789398   74859 host.go:66] Checking if "ha-720209-m04" exists ...
	I0910 17:52:53.789824   74859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:52:53.789881   74859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-720209-m04
	I0910 17:52:53.808197   74859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/ha-720209-m04/id_rsa Username:docker}
	I0910 17:52:53.896538   74859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:52:53.909074   74859 status.go:257] ha-720209-m04 status: &{Name:ha-720209-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.02s)
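Note: the non-zero exit above is intentional. minikube status encodes health in its exit code (documented as a bitmask: 1 = host not OK, 2 = cluster not OK, 4 = kubernetes not OK), so exit status 7 reflects the deliberately stopped m02 node rather than a test failure. A minimal sketch of reading it from a script:

	minikube -p ha-720209 status
	echo "status exit code: $?"   # was 7 in this run, with m02 stopped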

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

TestMultiControlPlane/serial/RestartSecondaryNode (73.1s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 node start m02 -v=7 --alsologtostderr
E0910 17:52:55.052731    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:55.059265    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:55.070630    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:55.095305    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:55.136991    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:55.219230    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:55.380813    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:55.702442    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:56.344262    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:57.626428    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:00.188235    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:05.309558    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:15.551680    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:18.799182    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:36.047757    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:46.503514    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 node start m02 -v=7 --alsologtostderr: (1m11.661164122s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr: (1.303021166s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (73.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.16s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.160672459s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.16s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (242.93s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-720209 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-720209 -v=7 --alsologtostderr
E0910 17:54:17.016920    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-720209 -v=7 --alsologtostderr: (34.250236623s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-720209 --wait=true -v=7 --alsologtostderr
E0910 17:55:38.938973    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:57:55.052221    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-720209 --wait=true -v=7 --alsologtostderr: (3m28.526096528s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-720209
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (242.93s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.26s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 node delete m03 -v=7 --alsologtostderr
E0910 17:58:18.798886    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:58:22.781171    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 node delete m03 -v=7 --alsologtostderr: (10.33628332s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.26s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

TestMultiControlPlane/serial/StopCluster (32.97s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 stop -v=7 --alsologtostderr: (32.867629675s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr: exit status 7 (106.637367ms)

-- stdout --
	ha-720209
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-720209-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-720209-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 17:58:59.548765  102810 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:58:59.548890  102810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:58:59.548898  102810 out.go:358] Setting ErrFile to fd 2...
	I0910 17:58:59.548903  102810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:58:59.549225  102810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 17:58:59.549462  102810 out.go:352] Setting JSON to false
	I0910 17:58:59.549492  102810 mustload.go:65] Loading cluster: ha-720209
	I0910 17:58:59.549614  102810 notify.go:220] Checking for updates...
	I0910 17:58:59.549913  102810 config.go:182] Loaded profile config "ha-720209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:58:59.549932  102810 status.go:255] checking status of ha-720209 ...
	I0910 17:58:59.550709  102810 cli_runner.go:164] Run: docker container inspect ha-720209 --format={{.State.Status}}
	I0910 17:58:59.570297  102810 status.go:330] ha-720209 host status = "Stopped" (err=<nil>)
	I0910 17:58:59.570323  102810 status.go:343] host is not running, skipping remaining checks
	I0910 17:58:59.570350  102810 status.go:257] ha-720209 status: &{Name:ha-720209 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:58:59.570375  102810 status.go:255] checking status of ha-720209-m02 ...
	I0910 17:58:59.570716  102810 cli_runner.go:164] Run: docker container inspect ha-720209-m02 --format={{.State.Status}}
	I0910 17:58:59.594834  102810 status.go:330] ha-720209-m02 host status = "Stopped" (err=<nil>)
	I0910 17:58:59.594852  102810 status.go:343] host is not running, skipping remaining checks
	I0910 17:58:59.594859  102810 status.go:257] ha-720209-m02 status: &{Name:ha-720209-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:58:59.594877  102810 status.go:255] checking status of ha-720209-m04 ...
	I0910 17:58:59.595181  102810 cli_runner.go:164] Run: docker container inspect ha-720209-m04 --format={{.State.Status}}
	I0910 17:58:59.611709  102810 status.go:330] ha-720209-m04 host status = "Stopped" (err=<nil>)
	I0910 17:58:59.611730  102810 status.go:343] host is not running, skipping remaining checks
	I0910 17:58:59.611738  102810 status.go:257] ha-720209-m04 status: &{Name:ha-720209-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.97s)

TestMultiControlPlane/serial/RestartCluster (86.36s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-720209 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-720209 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m25.383747451s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.36s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

TestMultiControlPlane/serial/AddSecondaryNode (46.54s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-720209 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-720209 --control-plane -v=7 --alsologtostderr: (45.49081011s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-720209 status -v=7 --alsologtostderr: (1.05315935s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestImageBuild/serial/Setup (30.92s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-172581 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-172581 --driver=docker  --container-runtime=docker: (30.915498267s)
--- PASS: TestImageBuild/serial/Setup (30.92s)

TestImageBuild/serial/NormalBuild (1.88s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-172581
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-172581: (1.878628278s)
--- PASS: TestImageBuild/serial/NormalBuild (1.88s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-172581
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-172581: (1.027687228s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)
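Note: --build-opt forwards options to the underlying docker build, which is how the build arg and cache control above are expressed. A minimal sketch, with an illustrative profile name:

	minikube -p demo image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
	  ./testdata/image-build/test-arg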

TestImageBuild/serial/BuildWithDockerIgnore (0.77s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-172581
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-172581
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)

TestJSONOutput/start/Command (43.12s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-584764 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-584764 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (43.11266474s)
--- PASS: TestJSONOutput/start/Command (43.12s)
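Note: with --output=json each progress step is emitted as a CloudEvents-style JSON object on stdout, which is what the Audit and CurrentSteps subtests below check; --user tags the invocation in minikube's audit log. A minimal sketch of consuming the stream, assuming jq is available (event type and field names as in recent minikube releases):

	minikube start -p demo --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.name'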

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-584764 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-584764 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-584764 --output=json --user=testUser
E0910 18:02:55.065086    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-584764 --output=json --user=testUser: (10.904663014s)
--- PASS: TestJSONOutput/stop/Command (10.90s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-180587 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-180587 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.305968ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2af2a022-7e10-49a0-b2d3-2285f006e078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-180587] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"77a9e42b-dfc1-47dd-8225-4700de2444c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"0b814b9b-2693-4860-948c-7a0dd4fac26b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"778f7efb-2f30-4bdd-883c-365f1356981a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig"}}
	{"specversion":"1.0","id":"ecc41ddc-1494-4221-8bf4-8ed5889a9119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube"}}
	{"specversion":"1.0","id":"5f583128-dc18-4078-8f4b-2823cdb09b7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"aa038717-e407-45f2-a8f1-b0163d783d73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cb3e9326-aa6f-4be7-96e0-52ff18dbabaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-180587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-180587
--- PASS: TestErrorJSONOutput (0.23s)
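Each --output=json line above is a standalone CloudEvents record, so the stream can be filtered line by line. A sketch with jq (assuming jq is installed) that surfaces error events like the DRV_UNSUPPORTED_OS one shown:

out/minikube-linux-arm64 start -p json-output-error-180587 --output=json --driver=fail 2>/dev/null \
  | jq -r 'select(.type | endswith(".error")) | .data.name + ": " + .data.message'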

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.98s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-366921 --network=
E0910 18:03:18.798997    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-366921 --network=: (31.845121098s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-366921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-366921
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-366921: (2.115085304s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.98s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.25s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-477374 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-477374 --network=bridge: (34.227358099s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-477374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-477374
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-477374: (1.999243436s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.25s)

                                                
                                    
TestKicExistingNetwork (33.4s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-207019 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-207019 --network=existing-network: (31.21256409s)
helpers_test.go:175: Cleaning up "existing-network-207019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-207019
E0910 18:04:41.864881    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-207019: (2.011340959s)
--- PASS: TestKicExistingNetwork (33.40s)

                                                
                                    
TestKicCustomSubnet (35.72s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-270049 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-270049 --subnet=192.168.60.0/24: (33.473163922s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-270049 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-270049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-270049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-270049: (2.221805459s)
--- PASS: TestKicCustomSubnet (35.72s)
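A sketch of the assertion behind this test: read the subnet back from the Docker network the profile created and compare it to the requested CIDR:

subnet=$(docker network inspect custom-subnet-270049 --format '{{(index .IPAM.Config 0).Subnet}}')
[ "$subnet" = "192.168.60.0/24" ] && echo "subnet matches" || echo "unexpected subnet: $subnet"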

                                                
                                    
TestKicStaticIP (34.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-849289 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-849289 --static-ip=192.168.200.200: (31.722438492s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-849289 ip
helpers_test.go:175: Cleaning up "static-ip-849289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-849289
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-849289: (2.232093722s)
--- PASS: TestKicStaticIP (34.10s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (76.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-678979 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-678979 --driver=docker  --container-runtime=docker: (34.631865162s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-681695 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-681695 --driver=docker  --container-runtime=docker: (36.457495136s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-678979
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-681695
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-681695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-681695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-681695: (2.14460879s)
helpers_test.go:175: Cleaning up "first-678979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-678979
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-678979: (2.090661699s)
--- PASS: TestMinikubeProfile (76.72s)
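The `profile list -ojson` output used above is machine-readable; a sketch for pulling the profile names (the JSON field names are assumed, not verified here):

out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'   # .valid / .Name are assumed field names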

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.79s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-450833 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-450833 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.786241757s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.79s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-450833 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.93s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-463034 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-463034 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.930279737s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.93s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-463034 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.47s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-450833 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-450833 --alsologtostderr -v=5: (1.46892142s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-463034 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-463034
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-463034: (1.330227738s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.81s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-463034
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-463034: (7.804915386s)
--- PASS: TestMountStart/serial/RestartStopped (8.81s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-463034 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (86.93s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-670680 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0910 18:07:55.052287    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:08:18.799131    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-670680 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.296916305s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.93s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (36.71s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-670680 -- rollout status deployment/busybox: (4.615818515s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0910 18:09:18.143285    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-227gz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-m5tpg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-227gz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-m5tpg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-227gz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-m5tpg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.71s)
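The repeated jsonpath queries above amount to polling until both busybox replicas report an IP; a sketch of the same wait loop (retry interval and timeout policy assumed):

until [ "$(kubectl --context multinode-670680 get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -ge 2 ]; do
  sleep 2   # keep polling until two pod IPs appear
done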

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-227gz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-227gz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-m5tpg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-670680 -- exec busybox-7dff88458-m5tpg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)

                                                
                                    
TestMultiNode/serial/AddNode (21.17s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-670680 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-670680 -v 3 --alsologtostderr: (20.372937622s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.17s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-670680 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp testdata/cp-test.txt multinode-670680:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2829118045/001/cp-test_multinode-670680.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680:/home/docker/cp-test.txt multinode-670680-m02:/home/docker/cp-test_multinode-670680_multinode-670680-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m02 "sudo cat /home/docker/cp-test_multinode-670680_multinode-670680-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680:/home/docker/cp-test.txt multinode-670680-m03:/home/docker/cp-test_multinode-670680_multinode-670680-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m03 "sudo cat /home/docker/cp-test_multinode-670680_multinode-670680-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp testdata/cp-test.txt multinode-670680-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2829118045/001/cp-test_multinode-670680-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680-m02:/home/docker/cp-test.txt multinode-670680:/home/docker/cp-test_multinode-670680-m02_multinode-670680.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680 "sudo cat /home/docker/cp-test_multinode-670680-m02_multinode-670680.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680-m02:/home/docker/cp-test.txt multinode-670680-m03:/home/docker/cp-test_multinode-670680-m02_multinode-670680-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m03 "sudo cat /home/docker/cp-test_multinode-670680-m02_multinode-670680-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp testdata/cp-test.txt multinode-670680-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2829118045/001/cp-test_multinode-670680-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680-m03:/home/docker/cp-test.txt multinode-670680:/home/docker/cp-test_multinode-670680-m03_multinode-670680.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680 "sudo cat /home/docker/cp-test_multinode-670680-m03_multinode-670680.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680-m03:/home/docker/cp-test.txt multinode-670680-m02:/home/docker/cp-test_multinode-670680-m03_multinode-670680-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 ssh -n multinode-670680-m02 "sudo cat /home/docker/cp-test_multinode-670680-m03_multinode-670680-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.26s)
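The copy matrix above reduces to three directions of `minikube cp`; the local destination path in the second line is illustrative:

out/minikube-linux-arm64 -p multinode-670680 cp testdata/cp-test.txt multinode-670680:/home/docker/cp-test.txt             # host -> node
out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680:/home/docker/cp-test.txt /tmp/cp-test.txt                 # node -> host (path illustrative)
out/minikube-linux-arm64 -p multinode-670680 cp multinode-670680:/home/docker/cp-test.txt multinode-670680-m02:/home/docker/cp-test.txt   # node -> node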

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-670680 node stop m03: (1.236001858s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-670680 status: exit status 7 (575.952085ms)

                                                
                                                
-- stdout --
	multinode-670680
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-670680-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-670680-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr: exit status 7 (519.204191ms)

                                                
                                                
-- stdout --
	multinode-670680
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-670680-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-670680-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:10:20.543203  177015 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:10:20.543798  177015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:10:20.543818  177015 out.go:358] Setting ErrFile to fd 2...
	I0910 18:10:20.543825  177015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:10:20.544095  177015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 18:10:20.544294  177015 out.go:352] Setting JSON to false
	I0910 18:10:20.544341  177015 mustload.go:65] Loading cluster: multinode-670680
	I0910 18:10:20.544410  177015 notify.go:220] Checking for updates...
	I0910 18:10:20.545777  177015 config.go:182] Loaded profile config "multinode-670680": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:10:20.545808  177015 status.go:255] checking status of multinode-670680 ...
	I0910 18:10:20.546397  177015 cli_runner.go:164] Run: docker container inspect multinode-670680 --format={{.State.Status}}
	I0910 18:10:20.567106  177015 status.go:330] multinode-670680 host status = "Running" (err=<nil>)
	I0910 18:10:20.567131  177015 host.go:66] Checking if "multinode-670680" exists ...
	I0910 18:10:20.567439  177015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-670680
	I0910 18:10:20.594186  177015 host.go:66] Checking if "multinode-670680" exists ...
	I0910 18:10:20.594548  177015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:10:20.594622  177015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-670680
	I0910 18:10:20.614554  177015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/multinode-670680/id_rsa Username:docker}
	I0910 18:10:20.707643  177015 ssh_runner.go:195] Run: systemctl --version
	I0910 18:10:20.714404  177015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:10:20.727200  177015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0910 18:10:20.784008  177015 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-10 18:10:20.774135421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0910 18:10:20.784612  177015 kubeconfig.go:125] found "multinode-670680" server: "https://192.168.67.2:8443"
	I0910 18:10:20.784642  177015 api_server.go:166] Checking apiserver status ...
	I0910 18:10:20.784692  177015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:10:20.796604  177015 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0910 18:10:20.807326  177015 api_server.go:182] apiserver freezer: "11:freezer:/docker/f25eaf4dd7880b2d5109d2b6e5ed8d45c74b5ea2b27df5369e74dc357eb6b3b5/kubepods/burstable/podf881e160f61e20f6726610704f8363c6/0cb79f654509da69465ac25bc8fa890c439113a64ab1685e680a6a2399ed7717"
	I0910 18:10:20.807398  177015 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f25eaf4dd7880b2d5109d2b6e5ed8d45c74b5ea2b27df5369e74dc357eb6b3b5/kubepods/burstable/podf881e160f61e20f6726610704f8363c6/0cb79f654509da69465ac25bc8fa890c439113a64ab1685e680a6a2399ed7717/freezer.state
	I0910 18:10:20.816545  177015 api_server.go:204] freezer state: "THAWED"
	I0910 18:10:20.816574  177015 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0910 18:10:20.824625  177015 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0910 18:10:20.824656  177015 status.go:422] multinode-670680 apiserver status = Running (err=<nil>)
	I0910 18:10:20.824667  177015 status.go:257] multinode-670680 status: &{Name:multinode-670680 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:10:20.824687  177015 status.go:255] checking status of multinode-670680-m02 ...
	I0910 18:10:20.824992  177015 cli_runner.go:164] Run: docker container inspect multinode-670680-m02 --format={{.State.Status}}
	I0910 18:10:20.850433  177015 status.go:330] multinode-670680-m02 host status = "Running" (err=<nil>)
	I0910 18:10:20.850458  177015 host.go:66] Checking if "multinode-670680-m02" exists ...
	I0910 18:10:20.850772  177015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-670680-m02
	I0910 18:10:20.868106  177015 host.go:66] Checking if "multinode-670680-m02" exists ...
	I0910 18:10:20.868418  177015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:10:20.868466  177015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-670680-m02
	I0910 18:10:20.886283  177015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/multinode-670680-m02/id_rsa Username:docker}
	I0910 18:10:20.975668  177015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:10:20.987680  177015 status.go:257] multinode-670680-m02 status: &{Name:multinode-670680-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:10:20.987716  177015 status.go:255] checking status of multinode-670680-m03 ...
	I0910 18:10:20.988020  177015 cli_runner.go:164] Run: docker container inspect multinode-670680-m03 --format={{.State.Status}}
	I0910 18:10:21.009339  177015 status.go:330] multinode-670680-m03 host status = "Stopped" (err=<nil>)
	I0910 18:10:21.009365  177015 status.go:343] host is not running, skipping remaining checks
	I0910 18:10:21.009374  177015 status.go:257] multinode-670680-m03 status: &{Name:multinode-670680-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
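As the exit status 7 above indicates, `minikube status` exits non-zero once any host in the profile is down; a sketch of checking that from a script:

out/minikube-linux-arm64 -p multinode-670680 node stop m03
out/minikube-linux-arm64 -p multinode-670680 status || echo "status exited $? (7 in the run above: a host is stopped)"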

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-670680 node start m03 -v=7 --alsologtostderr: (10.193037295s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.94s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (98.66s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-670680
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-670680
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-670680: (22.719299445s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-670680 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-670680 --wait=true -v=8 --alsologtostderr: (1m15.806441498s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-670680
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.66s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-670680 node delete m03: (5.077136954s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.62s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-670680 stop: (21.439472467s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-670680 status: exit status 7 (90.911318ms)

                                                
                                                
-- stdout --
	multinode-670680
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-670680-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr: exit status 7 (84.519486ms)

                                                
                                                
-- stdout --
	multinode-670680
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-670680-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:12:37.947211  190574 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:12:37.947357  190574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:12:37.947368  190574 out.go:358] Setting ErrFile to fd 2...
	I0910 18:12:37.947374  190574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:12:37.947605  190574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
	I0910 18:12:37.947786  190574 out.go:352] Setting JSON to false
	I0910 18:12:37.947845  190574 mustload.go:65] Loading cluster: multinode-670680
	I0910 18:12:37.947932  190574 notify.go:220] Checking for updates...
	I0910 18:12:37.948871  190574 config.go:182] Loaded profile config "multinode-670680": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:12:37.948899  190574 status.go:255] checking status of multinode-670680 ...
	I0910 18:12:37.949395  190574 cli_runner.go:164] Run: docker container inspect multinode-670680 --format={{.State.Status}}
	I0910 18:12:37.966216  190574 status.go:330] multinode-670680 host status = "Stopped" (err=<nil>)
	I0910 18:12:37.966241  190574 status.go:343] host is not running, skipping remaining checks
	I0910 18:12:37.966249  190574 status.go:257] multinode-670680 status: &{Name:multinode-670680 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:12:37.966280  190574 status.go:255] checking status of multinode-670680-m02 ...
	I0910 18:12:37.966626  190574 cli_runner.go:164] Run: docker container inspect multinode-670680-m02 --format={{.State.Status}}
	I0910 18:12:37.987907  190574 status.go:330] multinode-670680-m02 host status = "Stopped" (err=<nil>)
	I0910 18:12:37.987931  190574 status.go:343] host is not running, skipping remaining checks
	I0910 18:12:37.987939  190574 status.go:257] multinode-670680-m02 status: &{Name:multinode-670680-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.62s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.38s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-670680 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0910 18:12:55.052283    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:13:18.798850    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-670680 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.637330307s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-670680 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.38s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-670680
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-670680-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-670680-m02 --driver=docker  --container-runtime=docker: exit status 14 (80.015108ms)

                                                
                                                
-- stdout --
	* [multinode-670680-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-670680-m02' is duplicated with machine name 'multinode-670680-m02' in profile 'multinode-670680'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-670680-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-670680-m03 --driver=docker  --container-runtime=docker: (32.918836359s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-670680
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-670680: exit status 80 (323.869465ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-670680 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-670680-m03 already exists in multinode-670680-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-670680-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-670680-m03: (2.083235809s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.45s)
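The two rejections above surface only through exit codes: 14 (MK_USAGE) for the duplicated profile name, 80 (GUEST_NODE_ADD) for the node that already exists. A minimal sketch of distinguishing them when driving the binary from Go, assuming minikube is on PATH (the profile name is the illustrative one from this run):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Start a profile whose name collides with an existing machine
		// name; minikube refuses with MK_USAGE (exit code 14).
		cmd := exec.Command("minikube", "start", "-p", "multinode-670680-m02",
			"--driver=docker", "--container-runtime=docker")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			switch exitErr.ExitCode() {
			case 14:
				fmt.Println("usage error: profile name must be unique")
			case 80:
				fmt.Println("guest error: node already exists in another profile")
			default:
				fmt.Printf("unexpected exit code %d\n", exitErr.ExitCode())
			}
		}
	}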

                                                
                                    
TestPreload (102.78s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-231340 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-231340 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m3.353608647s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-231340 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-231340 image pull gcr.io/k8s-minikube/busybox: (2.145869591s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-231340
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-231340: (10.855206054s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-231340 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-231340 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.779647713s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-231340 image list
helpers_test.go:175: Cleaning up "test-preload-231340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-231340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-231340: (2.255169986s)
--- PASS: TestPreload (102.78s)

                                                
                                    
TestScheduledStopUnix (104.33s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-592248 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-592248 --memory=2048 --driver=docker  --container-runtime=docker: (31.099400809s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-592248 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-592248 -n scheduled-stop-592248
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-592248 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-592248 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-592248 -n scheduled-stop-592248
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-592248
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-592248 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-592248
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-592248: exit status 7 (70.793461ms)

                                                
                                                
-- stdout --
	scheduled-stop-592248
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-592248 -n scheduled-stop-592248
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-592248 -n scheduled-stop-592248: exit status 7 (71.698149ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-592248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-592248
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-592248: (1.660357777s)
--- PASS: TestScheduledStopUnix (104.33s)
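The run above schedules a stop, cancels it, re-schedules, and then checks status until the host reports Stopped (exit status 7, "may be ok"). A rough sketch of that flow under the same assumptions (a running Docker-driver profile; name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// run invokes minikube with the given args and returns combined output.
	func run(args ...string) (string, error) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "scheduled-stop-592248" // illustrative profile name

		// Schedule a stop 15 seconds out, then cancel it before it fires.
		run("stop", "-p", profile, "--schedule", "15s")
		run("stop", "-p", profile, "--cancel-scheduled")

		// Schedule again and let it fire; once the host is down,
		// `minikube status` exits 7 and prints "Stopped".
		run("stop", "-p", profile, "--schedule", "15s")
		time.Sleep(30 * time.Second)
		out, err := run("status", "--format={{.Host}}", "-p", profile)
		fmt.Printf("host state: %s (err: %v)\n", out, err)
	}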

                                                
                                    
TestSkaffold (120.23s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe234192283 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-447070 --memory=2600 --driver=docker  --container-runtime=docker
E0910 18:17:55.052774    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-447070 --memory=2600 --driver=docker  --container-runtime=docker: (33.043751016s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe234192283 run --minikube-profile skaffold-447070 --kube-context skaffold-447070 --status-check=true --port-forward=false --interactive=false
E0910 18:18:18.798731    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe234192283 run --minikube-profile skaffold-447070 --kube-context skaffold-447070 --status-check=true --port-forward=false --interactive=false: (1m11.545400773s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-77d599fd69-nhg74" [551ba9af-6bf8-4c56-8751-00b47b91d3da] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004012777s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7685cb9b94-ghk9x" [2fa0c475-5d96-4cdb-a1e0-d1d13e03e22a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004508138s
helpers_test.go:175: Cleaning up "skaffold-447070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-447070
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-447070: (3.031468622s)
--- PASS: TestSkaffold (120.23s)

                                                
                                    
TestInsufficientStorage (11.64s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-116776 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-116776 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.346242943s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4719f548-fbe7-4436-a17a-1e431f0faf2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-116776] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9089a3c1-2651-4702-b4c8-7061e18917c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"b7032063-8881-4967-b2c9-6f2d083d8c54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f51addcb-cf79-4ac0-8f7d-06a0e8b69aac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig"}}
	{"specversion":"1.0","id":"2b7f2e95-4d5f-4264-a508-6d710d7dbbc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube"}}
	{"specversion":"1.0","id":"5fe9fd8d-a1ec-4b6c-89bc-623b5d242d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"58096d11-3dd1-4b02-8143-79c17c7d0709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2bd3723-e6bc-4a32-b33d-cbd45d90e627","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8b47b3df-b444-4d45-a89b-65e68064e036","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"933073e2-58ef-45aa-aed3-e9f8c2c1f373","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ba3e03c-c47a-4f5f-b9f6-5b4b5eba28f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0c1cb505-56e8-4ee4-b4a4-2365610bfd63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-116776\" primary control-plane node in \"insufficient-storage-116776\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0876cded-4c85-44ad-b4ed-005d263df87e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1725963390-19606 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"da1027a3-a0a9-4bc2-9728-e168c86428a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b052cad-f525-41f3-88cf-c8e9bb2924be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-116776 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-116776 --output=json --layout=cluster: exit status 7 (287.914003ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-116776","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-116776","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:19:51.761439  224468 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-116776" does not appear in /home/jenkins/minikube-integration/19598-2209/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-116776 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-116776 --output=json --layout=cluster: exit status 7 (304.516351ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-116776","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-116776","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:19:52.066454  224530 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-116776" does not appear in /home/jenkins/minikube-integration/19598-2209/kubeconfig
	E0910 18:19:52.077306  224530 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/insufficient-storage-116776/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-116776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-116776
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-116776: (1.700768289s)
--- PASS: TestInsufficientStorage (11.64s)
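With --output=json every progress line is a CloudEvents envelope, and the failure arrives as a single io.k8s.sigs.minikube.error event whose data carries the advice text and exitcode "26" (RSRC_DOCKER_STORAGE). A small decoder sketch, with the struct shape inferred from the payload above (all data values are strings):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the envelope minikube emits with --output=json.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe the output of `minikube start --output=json ...` to stdin.
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("start failed: %s (exitcode %s)\n",
					ev.Data["message"], ev.Data["exitcode"])
			}
		}
	}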

                                                
                                    
TestRunningBinaryUpgrade (107.42s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1812870150 start -p running-upgrade-986358 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1812870150 start -p running-upgrade-986358 --memory=2200 --vm-driver=docker  --container-runtime=docker: (42.708180271s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-986358 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0910 18:25:58.145117    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-986358 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.561070252s)
helpers_test.go:175: Cleaning up "running-upgrade-986358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-986358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-986358: (2.332991095s)
--- PASS: TestRunningBinaryUpgrade (107.42s)

                                                
                                    
TestKubernetesUpgrade (225.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.88297456s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-230666
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-230666: (10.834451211s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-230666 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-230666 status --format={{.Host}}: exit status 7 (80.881302ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0910 18:23:18.798803    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (2m5.535766567s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-230666 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (118.156407ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-230666] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-230666
	    minikube start -p kubernetes-upgrade-230666 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2306662 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-230666 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-230666 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.782890895s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-230666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-230666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-230666: (2.996622221s)
--- PASS: TestKubernetesUpgrade (225.35s)
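The upgrade path shown here is stop, then start again with a newer --kubernetes-version; a downgrade attempt is refused up front with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) plus the recreate advice printed above. The same sequence as a sketch (profile name taken from this run; versions as tested):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func mk(args ...string) error { return exec.Command("minikube", args...).Run() }

	func main() {
		const p = "kubernetes-upgrade-230666" // illustrative profile name

		// Upgrade: stop the old cluster, restart it at the newer version.
		mk("stop", "-p", p)
		if err := mk("start", "-p", p, "--kubernetes-version=v1.31.0",
			"--driver=docker", "--container-runtime=docker"); err != nil {
			fmt.Println("upgrade failed:", err)
			return
		}

		// Downgrades are rejected before any work happens; expect exit 106.
		err := mk("start", "-p", p, "--kubernetes-version=v1.20.0",
			"--driver=docker", "--container-runtime=docker")
		fmt.Println("downgrade attempt:", err) // e.g. "exit status 106"
	}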

                                                
                                    
TestMissingContainerUpgrade (170.81s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.825015751 start -p missing-upgrade-165139 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.825015751 start -p missing-upgrade-165139 --memory=2200 --driver=docker  --container-runtime=docker: (1m29.142835335s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-165139
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-165139: (10.436219974s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-165139
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-165139 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0910 18:22:55.051925    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-165139 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m8.103164254s)
helpers_test.go:175: Cleaning up "missing-upgrade-165139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-165139
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-165139: (2.161805033s)
--- PASS: TestMissingContainerUpgrade (170.81s)

                                                
                                    
TestPause/serial/Start (83.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-082870 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-082870 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m23.930432154s)
--- PASS: TestPause/serial/Start (83.93s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (35.75s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-082870 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0910 18:21:21.866475    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-082870 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.728419233s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.75s)

                                                
                                    
TestPause/serial/Pause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-082870 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-082870 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-082870 --output=json --layout=cluster: exit status 2 (382.704274ms)

                                                
                                                
-- stdout --
	{"Name":"pause-082870","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-082870","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
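A paused cluster reports the HTTP-style code 418 in the --layout=cluster JSON while the command itself exits 2. A sketch for pulling out just the fields this check looks at, with the struct shape inferred from the payload above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterStatus models the top-level fields of
	// `minikube status --output=json --layout=cluster`.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	func main() {
		// Output() still returns the captured stdout alongside the
		// non-zero-exit error (exit status 2 while paused).
		out, _ := exec.Command("minikube", "status", "-p", "pause-082870",
			"--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName) // 418 Paused
	}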

                                                
                                    
TestPause/serial/Unpause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-082870 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

                                                
                                    
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-082870 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
TestPause/serial/DeletePaused (2.33s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-082870 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-082870 --alsologtostderr -v=5: (2.331348694s)
--- PASS: TestPause/serial/DeletePaused (2.33s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-082870
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-082870: exit status 1 (22.577698ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-082870: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (83.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.945190719 start -p stopped-upgrade-646482 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0910 18:24:28.083165    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:28.090453    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:28.101858    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:28.123308    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:28.164637    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:28.246070    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:28.407552    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:28.729241    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:29.370775    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:30.652584    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.945190719 start -p stopped-upgrade-646482 --memory=2200 --vm-driver=docker  --container-runtime=docker: (46.41038011s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.945190719 -p stopped-upgrade-646482 stop
E0910 18:24:33.214369    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.945190719 -p stopped-upgrade-646482 stop: (2.109307231s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-646482 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0910 18:24:38.336522    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:24:48.578709    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:25:09.060211    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-646482 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.177584216s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.70s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-646482
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-646482: (2.137117294s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.14s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-675045 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-675045 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (111.856646ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-675045] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-675045 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-675045 --driver=docker  --container-runtime=docker: (39.701602869s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-675045 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-675045 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-675045 --no-kubernetes --driver=docker  --container-runtime=docker: (14.858787516s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-675045 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-675045 status -o json: exit status 2 (400.653349ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-675045","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-675045
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-675045: (1.987986419s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.25s)

                                                
                                    
TestNoKubernetes/serial/Start (11.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-675045 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-675045 --no-kubernetes --driver=docker  --container-runtime=docker: (11.68237945s)
--- PASS: TestNoKubernetes/serial/Start (11.68s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-675045 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-675045 "sudo systemctl is-active --quiet service kubelet": exit status 1 (387.027475ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
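The exit codes line up as expected here: `systemctl is-active` returns 3 for an inactive unit, `minikube ssh` reports that on stderr and itself exits 1, so any non-zero exit is the "kubelet not running" signal. A sketch of the same probe (profile name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 would mean kubelet is active; with --no-kubernetes
		// the unit is inactive, so this returns an error.
		err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-675045",
			"sudo systemctl is-active --quiet service kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not running (expected):", err)
		}
	}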

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (2.487801566s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.91s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-675045
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-675045: (1.275440698s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-675045 --driver=docker  --container-runtime=docker
E0910 18:27:55.053463    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-675045 --driver=docker  --container-runtime=docker: (8.729502935s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-675045 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-675045 "sudo systemctl is-active --quiet service kubelet": exit status 1 (340.498602ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (174.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-336913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-336913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m54.297347456s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-336913 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [968cda06-0753-4a9b-8d45-09e712061ecf] Pending
helpers_test.go:344: "busybox" [968cda06-0753-4a9b-8d45-09e712061ecf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [968cda06-0753-4a9b-8d45-09e712061ecf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004412685s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-336913 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.64s)
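The deploy step is a create followed by a label-selector wait until the busybox pod reaches Running. The test framework polls pod status itself; a rough equivalent using kubectl's built-in wait (a swapped-in technique, not what helpers_test.go does; context name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "old-k8s-version-336913" // illustrative kubectl context

		// Create the workload, then block until the labelled pod is Ready,
		// mirroring "waiting 8m0s for pods matching integration-test=busybox".
		exec.Command("kubectl", "--context", ctx,
			"create", "-f", "testdata/busybox.yaml").Run()
		err := exec.Command("kubectl", "--context", ctx, "wait", "pod",
			"-l", "integration-test=busybox",
			"--for=condition=Ready", "--timeout=8m").Run()
		fmt.Println("wait result:", err) // nil once the pod is Ready
	}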

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-336913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-336913 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-336913 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-336913 --alsologtostderr -v=3: (10.937618108s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-336913 -n old-k8s-version-336913
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-336913 -n old-k8s-version-336913: exit status 7 (70.710958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-336913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (145.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-336913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-336913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m25.04031955s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-336913 -n old-k8s-version-336913
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (145.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (57.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-041236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0910 18:32:55.052092    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:33:18.798399    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-041236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (57.111805877s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-041236 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [55de2d3e-8aa9-4fcd-a9ae-45de554affc1] Pending
helpers_test.go:344: "busybox" [55de2d3e-8aa9-4fcd-a9ae-45de554affc1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [55de2d3e-8aa9-4fcd-a9ae-45de554affc1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00787956s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-041236 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-041236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-041236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073977804s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-041236 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)
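The --images and --registries flags override an addon component's default image and registry. Here MetricsServer is pointed at registry.k8s.io/echoserver:1.4 behind the deliberately unresolvable registry fake.domain; the follow-up describe only needs to see the overrides in the Deployment spec, so no image is actually pulled. Sketch (<profile> is a placeholder; the grep is an illustrative addition):

	out/minikube-linux-arm64 addons enable metrics-server -p <profile> \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain
	kubectl --context <profile> describe deploy/metrics-server -n kube-system | grep -i image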

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-041236 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-041236 --alsologtostderr -v=3: (11.060623402s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041236 -n no-preload-041236
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041236 -n no-preload-041236: exit status 7 (77.20469ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-041236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
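minikube status renders a Go template over its status struct, so --format={{.Host}} prints only the host state. Its exit code is a bitmask of not-running components (minikube=1, cluster=2, kubernetes=4, per minikube's status command), so 7 on a fully stopped profile is expected; that is what "may be ok" records. Enabling an addon while stopped just persists the setting for the next start. Sketch (<profile> is a placeholder):

	out/minikube-linux-arm64 status --format={{.Host}} -p <profile>   # prints Stopped, exits 7
	out/minikube-linux-arm64 addons enable dashboard -p <profile> \
		--images=MetricsScraper=registry.k8s.io/echoserver:1.4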

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (267.41s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-041236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-041236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m27.03869563s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041236 -n no-preload-041236
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.41s)
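SecondStart re-issues start with the same flags against the now-stopped profile; minikube finds the existing node container and resumes it instead of recreating the cluster, and the trailing status call asserts the host came back. Sketch (<profile> is a placeholder; flags elided for brevity):

	out/minikube-linux-arm64 start -p <profile> ...   # same flags as FirstStart
	out/minikube-linux-arm64 status --format={{.Host}} -p <profile>   # expect Running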

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-x6xxl" [46186798-e5cf-4d04-a9a0-48a1e9d55760] Running
E0910 18:34:28.083575    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003549935s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-x6xxl" [46186798-e5cf-4d04-a9a0-48a1e9d55760] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003762225s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-336913 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-336913 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
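image list --format=json dumps the images present in the node's container runtime (tags, digests, sizes); the check then flags anything outside minikube's expected set, here the busybox image left over from DeployApp. To eyeball the same list (the jq filter is an illustrative addition, not part of the suite):

	out/minikube-linux-arm64 -p old-k8s-version-336913 image list --format=json | jq -r '.[].repoTags[]?'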

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-336913 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-336913 -n old-k8s-version-336913
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-336913 -n old-k8s-version-336913: exit status 2 (320.279967ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-336913 -n old-k8s-version-336913
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-336913 -n old-k8s-version-336913: exit status 2 (325.859391ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-336913 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-336913 -n old-k8s-version-336913
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-336913 -n old-k8s-version-336913
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.89s)
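pause stops the kubelet and freezes the control-plane containers inside the node container without stopping the node itself, which is why status then reports APIServer "Paused" but Kubelet "Stopped". As with exit status 7 above, exit status 2 is a component-state code rather than a command failure, hence "may be ok"; unpause reverses the whole thing. Sketch (<profile> is a placeholder):

	out/minikube-linux-arm64 pause -p <profile> --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p <profile>   # Paused, exit 2
	out/minikube-linux-arm64 unpause -p <profile> --alsologtostderr -v=1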

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.49s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-529606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-529606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (43.491959173s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.49s)
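--embed-certs inlines the client certificate and key material into the kubeconfig entry (client-certificate-data and friends) instead of referencing files under ~/.minikube, making the kubeconfig self-contained. Sketch to confirm after such a start (the profile name embed-certs-demo is illustrative):

	out/minikube-linux-arm64 start -p embed-certs-demo --memory=2200 --embed-certs \
		--driver=docker --container-runtime=docker --kubernetes-version=v1.31.0
	grep client-certificate-data ~/.kube/config   # present only when certs are embedded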

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.45s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-529606 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4683037c-19b1-4cce-b3ed-e4382334ebea] Pending
helpers_test.go:344: "busybox" [4683037c-19b1-4cce-b3ed-e4382334ebea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4683037c-19b1-4cce-b3ed-e4382334ebea] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004636232s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-529606 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-529606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-529606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064133379s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-529606 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.03s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-529606 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-529606 --alsologtostderr -v=3: (11.032255274s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-529606 -n embed-certs-529606
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-529606 -n embed-certs-529606: exit status 7 (77.0662ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-529606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (304.28s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-529606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0910 18:36:39.180819    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:39.187308    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:39.198720    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:39.220142    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:39.261542    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:39.342934    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:39.504343    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:39.825604    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:40.467005    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:41.748290    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:44.309794    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:49.431152    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:59.673299    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:20.154816    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:55.068549    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:38:01.116922    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:38:01.867877    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:38:18.798496    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-529606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (5m3.788296699s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-529606 -n embed-certs-529606
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (304.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6wf56" [6fee2ab2-a492-4a1d-9770-c7df658db28f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003432967s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6wf56" [6fee2ab2-a492-4a1d-9770-c7df658db28f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003564981s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-041236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-041236 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.92s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-041236 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041236 -n no-preload-041236
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041236 -n no-preload-041236: exit status 2 (406.315855ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-041236 -n no-preload-041236
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-041236 -n no-preload-041236: exit status 2 (342.063407ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-041236 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041236 -n no-preload-041236
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-041236 -n no-preload-041236
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.04s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-700327 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0910 18:39:23.038946    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:39:28.083530    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-700327 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m19.037994583s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.04s)
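--apiserver-port=8444 moves the API server off minikube's default 8443, and the generated kubeconfig entry points at the remapped port. Sketch (the profile name diff-port-demo is illustrative):

	out/minikube-linux-arm64 start -p diff-port-demo --memory=2200 --apiserver-port=8444 \
		--driver=docker --container-runtime=docker --kubernetes-version=v1.31.0
	kubectl config view -o jsonpath='{.clusters[?(@.name=="diff-port-demo")].cluster.server}'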

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-700327 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [232994a8-856b-4c30-a992-e0f5f85fa0e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [232994a8-856b-4c30-a992-e0f5f85fa0e2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004452097s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-700327 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-700327 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-700327 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-700327 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-700327 --alsologtostderr -v=3: (10.847638634s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327: exit status 7 (65.380179ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-700327 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-700327 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0910 18:40:51.154983    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-700327 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m28.754641619s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lmltw" [30f4efd6-5e34-4a84-a144-97aa92362493] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003370955s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lmltw" [30f4efd6-5e34-4a84-a144-97aa92362493] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003819308s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-529606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-529606 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.98s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-529606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-529606 -n embed-certs-529606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-529606 -n embed-certs-529606: exit status 2 (344.354639ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-529606 -n embed-certs-529606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-529606 -n embed-certs-529606: exit status 2 (318.213863ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-529606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-529606 -n embed-certs-529606
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-529606 -n embed-certs-529606
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.72s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-546083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0910 18:41:39.179949    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-546083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (35.717493675s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.72s)
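This start stacks several passthrough mechanisms: --network-plugin=cni without applying a CNI manifest (hence the suite's "requires additional setup before pods can schedule" warnings below), --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 forwarding a value into kubeadm's configuration, --feature-gates handing gates to the components, and --wait narrowing which components minikube blocks on. By-hand equivalent (the profile name newest-cni-demo is illustrative):

	out/minikube-linux-arm64 start -p newest-cni-demo --memory=2200 \
		--wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
		--network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
		--driver=docker --container-runtime=docker --kubernetes-version=v1.31.0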

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.54s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-546083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-546083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.541808362s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (9.61s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-546083 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-546083 --alsologtostderr -v=3: (9.613761543s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-546083 -n newest-cni-546083
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-546083 -n newest-cni-546083: exit status 7 (83.164521ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-546083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.22s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-546083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0910 18:42:06.880298    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-546083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (17.7958308s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-546083 -n newest-cni-546083
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-546083 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.25s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-546083 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-546083 --alsologtostderr -v=1: (1.03945938s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-546083 -n newest-cni-546083
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-546083 -n newest-cni-546083: exit status 2 (337.92625ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-546083 -n newest-cni-546083
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-546083 -n newest-cni-546083: exit status 2 (369.886183ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-546083 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-546083 -n newest-cni-546083
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-546083 -n newest-cni-546083
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (77.64s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0910 18:42:38.146555    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:55.060609    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:18.798785    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m17.637510454s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.64s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-194714 "pgrep -a kubelet"
E0910 18:43:40.801987    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:40.808749    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:40.820126    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:40.841533    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:40.883605    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
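KubeletFlags shells into the node and runs pgrep -a, which prints the kubelet's PID together with its full command line, so the test can assert on the flags kubelet was started with. By hand:

	out/minikube-linux-arm64 ssh -p auto-194714 "pgrep -a kubelet"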

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-194714 replace --force -f testdata/netcat-deployment.yaml
E0910 18:43:40.965487    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:41.126710    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kwrsk" [c8f754be-4c87-44c0-8d4d-16a3afccf7bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0910 18:43:41.448359    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:42.090530    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:43.372560    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:45.933856    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-kwrsk" [c8f754be-4c87-44c0-8d4d-16a3afccf7bb] Running
E0910 18:43:51.055154    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004567403s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
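The three probes above all run inside the netcat deployment: DNS resolves kubernetes.default through the cluster DNS service, Localhost checks the pod can reach its own port over loopback, and HairPin dials the pod's own service name from inside the pod, exercising hairpin NAT in the network plugin. By hand:

	kubectl --context auto-194714 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"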

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (68.38s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0910 18:44:21.779410    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:28.083121    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/skaffold-447070/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m8.377350815s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.38s)
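--cni=kindnet swaps in the kindnet plugin in place of the auto-selected default, so the same kubelet-flag and netcat connectivity checks can be repeated against it. Sketch (the profile name kindnet-demo is illustrative):

	out/minikube-linux-arm64 start -p kindnet-demo --memory=3072 --cni=kindnet \
		--driver=docker --container-runtime=docker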

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-j5gjq" [ed79eae4-148e-40ca-a5c1-b90a41321370] Running
E0910 18:45:02.741085    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004851559s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-j5gjq" [ed79eae4-148e-40ca-a5c1-b90a41321370] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00445641s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-700327 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-700327 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-700327 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327: exit status 2 (502.560445ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327: exit status 2 (508.569775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-700327 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-700327 -n default-k8s-diff-port-700327
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.26s)
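Note: the pause check above can be replayed by hand with the same commands the test runs, using this run's profile name:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-700327
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-700327    # prints "Paused", exits 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-700327      # prints "Stopped", exits 2
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-700327

While paused, status reports Paused/Stopped and exits with code 2, which the harness records as "status error: exit status 2 (may be ok)" rather than as a failure.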
E0910 18:51:25.113329    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:28.282259    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.316166    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.322488    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.333843    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.355210    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.396591    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.477956    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.639402    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:32.961002    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:33.603182    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:34.884493    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:37.445921    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:39.180627    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:42.567379    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:44.206012    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:51:52.809658    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/calico-194714/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m14.533805384s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.53s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bmzhw" [b9cbee6b-5039-414d-be2e-e72364755b5e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004485991s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-194714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bsxdw" [55461949-bd22-48fe-bab4-b525b1a7d510] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bsxdw" [55461949-bd22-48fe-bab4-b525b1a7d510] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.02216011s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.40s)
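Note: each NetCatPod step (re)creates a small netcat deployment from testdata/netcat-deployment.yaml and waits for its pod to become Ready; the DNS, Localhost, and HairPin checks that follow all exec into this pod. The harness polls via helpers_test.go; outside it, a roughly equivalent manual sequence is:

	kubectl --context kindnet-194714 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-194714 wait --for=condition=Ready pod -l app=netcat --timeout=15m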

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)
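Note: the three exec probes above cover distinct paths from inside the netcat pod, and the same trio repeats below for every network plugin:

	kubectl --context kindnet-194714 exec deployment/netcat -- nslookup kubernetes.default                   # in-cluster DNS resolution
	kubectl --context kindnet-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # the pod reaching its own port directly
	kubectl --context kindnet-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # the pod dialing its own Service by name

The last probe must loop back through the netcat Service to the very pod that opened the connection; that loop-back is the hairpin path the HairPin test is named for.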

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (58.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0910 18:46:24.663284    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (58.534872612s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.53s)
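Note: as this group shows, the --cni flag accepts either a built-in plugin name (calico, flannel, bridge, and false all appear in this run) or a path to a CNI manifest on disk; trimmed to the networking flags, the two forms are:

	out/minikube-linux-arm64 start -p calico-194714 --cni=calico --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 start -p custom-flannel-194714 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker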

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9krkr" [afd23b12-c310-42d6-b57a-3f28d3adcfa3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005074593s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-194714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hwz7k" [4ec93469-4e4f-4df9-8fb1-209f91bbc020] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0910 18:46:39.180574    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/old-k8s-version-336913/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hwz7k" [4ec93469-4e4f-4df9-8fb1-209f91bbc020] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005190514s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.60s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-194714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b822z" [3e55b2e7-e91d-4b05-99d2-5ac3700f6c76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b822z" [3e55b2e7-e91d-4b05-99d2-5ac3700f6c76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004460143s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/Start (87.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m27.146099208s)
--- PASS: TestNetworkPlugins/group/false/Start (87.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0910 18:47:55.054556    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/functional-613813/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:18.798898    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:40.801737    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.253263    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.259674    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.271090    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.292563    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.333975    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.415400    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.577615    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:41.899312    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:42.540601    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:43.822890    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m25.003626913s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.00s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.53s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-194714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8vw7s" [f7c1284d-ccfd-4d74-bc51-67bca7d4558d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0910 18:48:46.384738    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-8vw7s" [f7c1284d-ccfd-4d74-bc51-67bca7d4558d] Running
E0910 18:48:51.506232    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00432328s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-194714 replace --force -f testdata/netcat-deployment.yaml
E0910 18:49:08.505273    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/no-preload-041236/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pmvc4" [502d46da-bcf1-4f76-96c8-9b28034ded4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pmvc4" [502d46da-bcf1-4f76-96c8-9b28034ded4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005010527s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m3.52166424s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.52s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (56.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0910 18:50:03.191592    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/auto-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.344899    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.351863    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.363216    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.384827    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.426212    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.507563    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.668906    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:06.990244    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:07.631611    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:08.913605    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:11.475227    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:16.596744    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (56.642969001s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.64s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hwlwj" [5ca710c3-7f50-4109-9946-2fb1d1f71362] Running
E0910 18:50:22.265309    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:22.272090    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:22.283849    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:22.305862    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:22.347772    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:22.429085    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:22.591298    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:22.912638    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:23.554612    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:24.836445    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:26.838787    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:50:27.398304    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005307781s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-194714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nqtlr" [0198354d-0c24-4f0c-beff-75170281856b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0910 18:50:32.520331    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nqtlr" [0198354d-0c24-4f0c-beff-75170281856b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003907493s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-194714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xs8bk" [80a9d747-bf62-4b57-b19f-de704502c816] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0910 18:50:42.762833    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/kindnet-194714/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-xs8bk" [80a9d747-bf62-4b57-b19f-de704502c816] Running
E0910 18:50:47.320371    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/default-k8s-diff-port-700327/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005815043s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (49.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-194714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (49.680890142s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.68s)
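Note: kubenet is the kubelet's legacy built-in network plugin rather than a CNI deployed from a manifest, so this group selects it with --network-plugin=kubenet instead of --cni; trimmed down:

	out/minikube-linux-arm64 start -p kubenet-194714 --network-plugin=kubenet --driver=docker --container-runtime=docker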

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-194714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-194714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rblnw" [78758eea-bf56-490b-a32c-16bea55765ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rblnw" [78758eea-bf56-490b-a32c-16bea55765ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004381567s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-194714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-194714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                    

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-686092 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-686092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-686092
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
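Here the gate is an opt-in test flag rather than a host property. A sketch of how such a flag-gated skip is wired (the flag name matches the log; the declaration shown is illustrative):

package example

import (
	"flag"
	"testing"
)

// gvisor defaults to false, so CI logs "--gvisor=false" unless a job
// explicitly opts in.
var gvisor = flag.Bool("gvisor", false, "run tests that require the gvisor addon")

func TestGvisorGate(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
}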

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
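This skip hinges on the invoking environment. A partial sketch (the none-driver half of the condition is elided; only the SUDO_USER check is shown, and the rationale in the comment is inferred from the test name):

package example

import (
	"os"
	"testing"
)

// TestSudoUserGate skips unless the suite runs under sudo, since the
// none-driver user-change behavior it exercises needs a real SUDO_USER.
func TestSudoUserGate(t *testing.T) {
	if os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}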

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-276741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-276741
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
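Note the non-zero duration: even though the group skips, the helper still deletes the profile it pre-created. A sketch of that cleanup step (binary path and subcommand copied from the log line above; error handling is illustrative):

package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile mirrors the helpers_test.go cleanup logged above: a
// skipped group still pays for "minikube delete -p <profile>", which is
// where this entry's 0.18s goes.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("delete -p %s failed: %v\n%s", profile, err, out)
	}
}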

TestNetworkPlugins/group/cilium (5.69s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-194714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-194714

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-194714

>>> host: /etc/nsswitch.conf:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /etc/hosts:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /etc/resolv.conf:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-194714

>>> host: crictl pods:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: crictl containers:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> k8s: describe netcat deployment:
error: context "cilium-194714" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-194714" does not exist

>>> k8s: netcat logs:
error: context "cilium-194714" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-194714" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-194714" does not exist

>>> k8s: coredns logs:
error: context "cilium-194714" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-194714" does not exist

>>> k8s: api server logs:
error: context "cilium-194714" does not exist

>>> host: /etc/cni:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: ip a s:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: ip r s:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: iptables-save:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: iptables table nat:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-194714

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-194714

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-194714" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-194714" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-194714

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-194714

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-194714" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-194714" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-194714" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-194714" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-194714" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: kubelet daemon config:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> k8s: kubelet logs:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-194714

>>> host: docker daemon status:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: docker daemon config:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: docker system info:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: cri-docker daemon status:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: cri-docker daemon config:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: cri-dockerd version:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: containerd daemon status:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: containerd daemon config:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: containerd config dump:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: crio daemon status:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: crio daemon config:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: /etc/crio:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

>>> host: crio config:
* Profile "cilium-194714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194714"

----------------------- debugLogs end: cilium-194714 [took: 5.526258026s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-194714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-194714
--- SKIP: TestNetworkPlugins/group/cilium (5.69s)
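Every probe in the debugLogs dump above failed the same way for the same reason: each one shells out against the cilium-194714 context, and that profile was never started, so kubectl and minikube reject it immediately (the empty kubeconfig under "k8s: kubectl config" confirms no context was ever written). A minimal sketch of one such probe; the command and profile name are taken from the log, the wrapper itself is assumed:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one kubectl query pinned to a profile's context, as the
// debugLogs collector does; with no such context the error text itself
// becomes the captured output.
func probe(profile string, args ...string) string {
	kargs := append([]string{"--context", profile}, args...)
	out, _ := exec.Command("kubectl", kargs...).CombinedOutput()
	return string(out)
}

func main() {
	// emits a context-not-found error like the ones throughout the dump
	fmt.Print(probe("cilium-194714", "get", "nodes"))
}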