Test Report: Docker_Linux_docker_arm64 19689

                    
af422e057ba227eec8656c67d09f56de251f325e:2024-09-23:36336

Tests failed (1/342)

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  75.51s
TestAddons/parallel/Registry (75.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.295153ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-k2qlh" [638f01b9-2726-41db-a1a9-43e4bf4d8443] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004201297s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bfrml" [cab49b7f-8d32-4017-9de8-d55b0ce0e2f3] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00325487s
addons_test.go:338: (dbg) Run:  kubectl --context addons-193618 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-193618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-193618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.122557201s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-193618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
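The failure above is a reachability timeout: inside the cluster, `wget --spider -S` against the registry Service never got an HTTP response within 1m0s. As a minimal sketch for local debugging (the function name `registry_reachable` is illustrative, not part of the minikube test suite), the same headers-only probe the busybox `wget --spider` performs can be approximated like this:

```python
import urllib.request
import urllib.error

def registry_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers an HTTP request within `timeout`,
    mirroring a `wget --spider` check (headers only, body discarded)."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except urllib.error.HTTPError:
        # The server responded, even if with an error status: reachable.
        return True
    except (urllib.error.URLError, OSError):
        # DNS failure, connection refused, or timeout: unreachable,
        # which is the condition this test run hit.
        return False
```

Run from outside the cluster, the in-cluster DNS name `registry.kube-system.svc.cluster.local` will not resolve; the probe would instead target a node IP and mapped port, as the log's later `GET http://192.168.49.2:5000` does.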
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 ip
2024/09/23 10:34:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-193618
helpers_test.go:235: (dbg) docker inspect addons-193618:

-- stdout --
	[
	    {
	        "Id": "26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8",
	        "Created": "2024-09-23T10:21:10.351973489Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:21:10.522608749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/hosts",
	        "LogPath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8-json.log",
	        "Name": "/addons-193618",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-193618:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-193618",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207-init/diff:/var/lib/docker/overlay2/6f03a4ef8a140fe5450018392e20b0528047b3be7fcd35f8ec674bbe5ee3d5d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-193618",
	                "Source": "/var/lib/docker/volumes/addons-193618/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-193618",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-193618",
	                "name.minikube.sigs.k8s.io": "addons-193618",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e9fdd0d3c0d3e742257c4aaf9b9d4dc4c797c56eb7c1ac271bbf53bc2e23b8d",
	            "SandboxKey": "/var/run/docker/netns/1e9fdd0d3c0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-193618": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8d168128c28da18e071830519b7328adca53718bb0836815783db8b049afc06a",
	                    "EndpointID": "10f0b59df9c2bfdc2674a5486ad12df98c281c58155240fee45500c8a048add7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-193618",
	                        "26ced008089d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
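The `docker inspect` dump above records, under `NetworkSettings.Ports`, which host port Docker bound to each exposed container port. As a hedged sketch (the helper `host_port` and the abridged sample JSON are illustrative, not from the test suite; the field paths follow Docker's inspect schema), the registry's 5000/tcp binding can be recovered programmatically:

```python
from __future__ import annotations
import json

# Abridged from the inspect output above: only the port map is kept.
sample = json.loads("""
[{"NetworkSettings": {"Ports": {
    "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32770"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
}}}]
""")

def host_port(inspect: list, container_port: str) -> str | None:
    """Return the first host port bound to `container_port`, or None
    if the port is exposed but not published."""
    bindings = inspect[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return bindings[0]["HostPort"] if bindings else None

print(host_port(sample, "5000/tcp"))  # → 32770
```

The same lookup is what `docker port addons-193618 5000` or a `--format` template would report; here 5000/tcp maps to 127.0.0.1:32770 on the host.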
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-193618 -n addons-193618
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 logs -n 25: (1.150115144s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-710688   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p download-only-710688              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p download-only-710688              | download-only-710688   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | -o=json --download-only              | download-only-126776   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p download-only-126776              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p download-only-126776              | download-only-126776   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p download-only-710688              | download-only-710688   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p download-only-126776              | download-only-126776   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | --download-only -p                   | download-docker-631157 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | download-docker-631157               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-631157            | download-docker-631157 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | --download-only -p                   | binary-mirror-590765   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | binary-mirror-590765                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33447               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-590765              | binary-mirror-590765   | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| addons  | enable dashboard -p                  | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | addons-193618                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | addons-193618                        |                        |         |         |                     |                     |
	| start   | -p addons-193618 --wait=true         | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-193618 addons disable         | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC | 23 Sep 24 10:25 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | -p addons-193618                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-193618 addons disable         | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-193618 addons                 | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-193618 addons                 | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-193618 ip                     | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	| addons  | addons-193618 addons disable         | addons-193618          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:44.962126    8275 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:44.962335    8275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:44.962384    8275 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:44.962404    8275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:44.962670    8275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 10:20:44.963127    8275 out.go:352] Setting JSON to false
	I0923 10:20:44.963912    8275 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":193,"bootTime":1727086652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 10:20:44.964006    8275 start.go:139] virtualization:  
	I0923 10:20:44.967538    8275 out.go:177] * [addons-193618] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 10:20:44.969391    8275 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:20:44.969453    8275 notify.go:220] Checking for updates...
	I0923 10:20:44.972072    8275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:44.974315    8275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	I0923 10:20:44.976239    8275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	I0923 10:20:44.978211    8275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 10:20:44.980137    8275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:20:44.982367    8275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:45.050981    8275 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:20:45.051144    8275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:45.195189    8275 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 10:20:45.184256429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:20:45.195340    8275 docker.go:318] overlay module found
	I0923 10:20:45.197697    8275 out.go:177] * Using the docker driver based on user configuration
	I0923 10:20:45.199831    8275 start.go:297] selected driver: docker
	I0923 10:20:45.199858    8275 start.go:901] validating driver "docker" against <nil>
	I0923 10:20:45.199874    8275 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:20:45.200611    8275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:45.322427    8275 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 10:20:45.310042631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:20:45.322751    8275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:45.323003    8275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:20:45.325098    8275 out.go:177] * Using Docker driver with root privileges
	I0923 10:20:45.327245    8275 cni.go:84] Creating CNI manager for ""
	I0923 10:20:45.327332    8275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:20:45.327345    8275 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:20:45.327445    8275 start.go:340] cluster config:
	{Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:20:45.329874    8275 out.go:177] * Starting "addons-193618" primary control-plane node in "addons-193618" cluster
	I0923 10:20:45.331989    8275 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:20:45.336452    8275 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:20:45.338592    8275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:20:45.338680    8275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 10:20:45.338692    8275 cache.go:56] Caching tarball of preloaded images
	I0923 10:20:45.338737    8275 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:20:45.338797    8275 preload.go:172] Found /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 10:20:45.338809    8275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 10:20:45.339241    8275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/config.json ...
	I0923 10:20:45.339315    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/config.json: {Name:mke8b7301d3a5167a1f1aba5f23a929aa585f3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:20:45.360091    8275 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:45.360333    8275 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:20:45.360373    8275 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:20:45.360402    8275 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:20:45.360412    8275 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:20:45.360454    8275 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 10:21:02.600728    8275 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 10:21:02.600790    8275 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:21:02.600823    8275 start.go:360] acquireMachinesLock for addons-193618: {Name:mk48dd4aba024ddd995eaf88bfc43ada7e8ca838 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:02.600968    8275 start.go:364] duration metric: took 122.798µs to acquireMachinesLock for "addons-193618"
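The `acquireMachinesLock` line above carries a lock spec of `{Delay:500ms Timeout:10m0s}`: retry the lock every 500ms until a 10-minute deadline passes. A minimal Python sketch of that retry-until-deadline file-lock pattern (hypothetical helper names, not minikube's actual implementation, which lives in its `lock` package):

```python
import errno
import os
import time

def acquire_lock(path, delay=0.5, timeout=600.0):
    """Try to create `path` as a lock file, retrying every `delay`
    seconds until `timeout` seconds have elapsed.

    Returns True if the lock was acquired, False on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # O_CREAT|O_EXCL makes creation atomic: exactly one caller wins.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise  # unexpected failure, not "lock already held"
            if time.monotonic() >= deadline:
                return False
            time.sleep(delay)

def release_lock(path):
    os.remove(path)
```

With the lock free, acquisition returns in microseconds, which matches the "took 122.798µs" duration metric above; the delay and timeout only matter under contention.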
	I0923 10:21:02.600997    8275 start.go:93] Provisioning new machine with config: &{Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:21:02.601070    8275 start.go:125] createHost starting for "" (driver="docker")
	I0923 10:21:02.604392    8275 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 10:21:02.604668    8275 start.go:159] libmachine.API.Create for "addons-193618" (driver="docker")
	I0923 10:21:02.604708    8275 client.go:168] LocalClient.Create starting
	I0923 10:21:02.604837    8275 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem
	I0923 10:21:04.047510    8275 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem
	I0923 10:21:04.291954    8275 cli_runner.go:164] Run: docker network inspect addons-193618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 10:21:04.309422    8275 cli_runner.go:211] docker network inspect addons-193618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 10:21:04.309509    8275 network_create.go:284] running [docker network inspect addons-193618] to gather additional debugging logs...
	I0923 10:21:04.309531    8275 cli_runner.go:164] Run: docker network inspect addons-193618
	W0923 10:21:04.324774    8275 cli_runner.go:211] docker network inspect addons-193618 returned with exit code 1
	I0923 10:21:04.324807    8275 network_create.go:287] error running [docker network inspect addons-193618]: docker network inspect addons-193618: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-193618 not found
	I0923 10:21:04.324821    8275 network_create.go:289] output of [docker network inspect addons-193618]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-193618 not found
	
	** /stderr **
	I0923 10:21:04.324924    8275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:21:04.340876    8275 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004b9240}
	I0923 10:21:04.340930    8275 network_create.go:124] attempt to create docker network addons-193618 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 10:21:04.341047    8275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-193618 addons-193618
	I0923 10:21:04.413242    8275 network_create.go:108] docker network addons-193618 192.168.49.0/24 created
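The `network.go:206` line above reports a derived record for the chosen free private subnet: gateway, usable client range, broadcast, and netmask for 192.168.49.0/24. Those fields follow mechanically from the CIDR, as this standalone Python sketch shows (a hypothetical helper using the stdlib `ipaddress` module, not minikube's Go code):

```python
import ipaddress

def subnet_fields(cidr):
    """Derive the address fields minikube logs for a candidate subnet.

    Gateway is the first usable host, clients span the remaining usable
    hosts, and broadcast/netmask come straight from the network object."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())  # usable hosts: excludes network and broadcast
    return {
        "Gateway": str(hosts[0]),
        "ClientMin": str(hosts[1]),
        "ClientMax": str(hosts[-1]),
        "Broadcast": str(net.broadcast_address),
        "Netmask": str(net.netmask),
    }
```

For "192.168.49.0/24" this reproduces the logged values: gateway 192.168.49.1, clients 192.168.49.2 through 192.168.49.254, broadcast 192.168.49.255, netmask 255.255.255.0.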
	I0923 10:21:04.413272    8275 kic.go:121] calculated static IP "192.168.49.2" for the "addons-193618" container
	I0923 10:21:04.413357    8275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 10:21:04.427026    8275 cli_runner.go:164] Run: docker volume create addons-193618 --label name.minikube.sigs.k8s.io=addons-193618 --label created_by.minikube.sigs.k8s.io=true
	I0923 10:21:04.445042    8275 oci.go:103] Successfully created a docker volume addons-193618
	I0923 10:21:04.445146    8275 cli_runner.go:164] Run: docker run --rm --name addons-193618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193618 --entrypoint /usr/bin/test -v addons-193618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 10:21:06.574836    8275 cli_runner.go:217] Completed: docker run --rm --name addons-193618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193618 --entrypoint /usr/bin/test -v addons-193618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.129645865s)
	I0923 10:21:06.574866    8275 oci.go:107] Successfully prepared a docker volume addons-193618
	I0923 10:21:06.574885    8275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:06.574905    8275 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 10:21:06.574974    8275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-193618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 10:21:10.282100    8275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-193618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.707083791s)
	I0923 10:21:10.282132    8275 kic.go:203] duration metric: took 3.707225404s to extract preloaded images to volume ...
	W0923 10:21:10.282287    8275 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 10:21:10.282411    8275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 10:21:10.336323    8275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-193618 --name addons-193618 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193618 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-193618 --network addons-193618 --ip 192.168.49.2 --volume addons-193618:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 10:21:10.691915    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Running}}
	I0923 10:21:10.719577    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:10.745328    8275 cli_runner.go:164] Run: docker exec addons-193618 stat /var/lib/dpkg/alternatives/iptables
	I0923 10:21:10.811993    8275 oci.go:144] the created container "addons-193618" has a running status.
	I0923 10:21:10.812024    8275 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa...
	I0923 10:21:11.880902    8275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 10:21:11.910796    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:11.928490    8275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 10:21:11.928513    8275 kic_runner.go:114] Args: [docker exec --privileged addons-193618 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 10:21:11.990769    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:12.015654    8275 machine.go:93] provisionDockerMachine start ...
	I0923 10:21:12.015774    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:12.036679    8275 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:12.036993    8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:12.037010    8275 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:21:12.172736    8275 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-193618
	
	I0923 10:21:12.172763    8275 ubuntu.go:169] provisioning hostname "addons-193618"
	I0923 10:21:12.172834    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:12.190417    8275 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:12.190678    8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:12.190696    8275 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-193618 && echo "addons-193618" | sudo tee /etc/hostname
	I0923 10:21:12.338311    8275 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-193618
	
	I0923 10:21:12.338393    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:12.356488    8275 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:12.356737    8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:12.356760    8275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-193618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-193618/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-193618' | sudo tee -a /etc/hosts; 
				fi
			fi
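The shell script the provisioner runs above updates /etc/hosts only if the new hostname is not already mapped, preferring to rewrite an existing 127.0.1.1 line over appending one. The same decision logic, expressed as a self-contained Python sketch operating on the file's text (hypothetical helper, for illustration only):

```python
import re

def ensure_hostname(hosts_text, name):
    """Mirror the shell logic: no-op if `name` already ends a hosts line,
    otherwise rewrite the 127.0.1.1 line, or append one if absent."""
    if re.search(r"^.*\s%s$" % re.escape(name), hosts_text, re.MULTILINE):
        return hosts_text  # hostname already present, nothing to do
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.MULTILINE):
        # sed 's/^127.0.1.1\s.*/127.0.1.1 <name>/' equivalent
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 %s" % name,
                      hosts_text, flags=re.MULTILINE)
    return hosts_text + "127.0.1.1 %s\n" % name  # tee -a equivalent
```

Running it twice is a no-op the second time, which is why the SSH command above can be re-executed safely on an already-provisioned machine.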
	I0923 10:21:12.493163    8275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:21:12.493187    8275 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19689-2206/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-2206/.minikube}
	I0923 10:21:12.493205    8275 ubuntu.go:177] setting up certificates
	I0923 10:21:12.493216    8275 provision.go:84] configureAuth start
	I0923 10:21:12.493272    8275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193618
	I0923 10:21:12.510414    8275 provision.go:143] copyHostCerts
	I0923 10:21:12.510500    8275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-2206/.minikube/ca.pem (1078 bytes)
	I0923 10:21:12.510680    8275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-2206/.minikube/cert.pem (1123 bytes)
	I0923 10:21:12.510744    8275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-2206/.minikube/key.pem (1675 bytes)
	I0923 10:21:12.510797    8275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-2206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca-key.pem org=jenkins.addons-193618 san=[127.0.0.1 192.168.49.2 addons-193618 localhost minikube]
	I0923 10:21:13.296721    8275 provision.go:177] copyRemoteCerts
	I0923 10:21:13.296791    8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:21:13.296832    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:13.314068    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:13.409871    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:21:13.434989    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:21:13.458893    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:21:13.483616    8275 provision.go:87] duration metric: took 990.387218ms to configureAuth
	I0923 10:21:13.483643    8275 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:21:13.483830    8275 config.go:182] Loaded profile config "addons-193618": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:13.483894    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:13.504477    8275 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:13.504728    8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:13.504747    8275 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 10:21:13.637687    8275 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 10:21:13.637710    8275 ubuntu.go:71] root file system type: overlay
	I0923 10:21:13.637823    8275 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 10:21:13.637893    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:13.655165    8275 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:13.655404    8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:13.655484    8275 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 10:21:13.800744    8275 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 10:21:13.800836    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:13.817289    8275 main.go:141] libmachine: Using SSH client type: native
	I0923 10:21:13.817574    8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:21:13.817599    8275 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 10:21:14.602184    8275 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:16.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 10:21:13.793820605 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0923 10:21:14.602265    8275 machine.go:96] duration metric: took 2.586585811s to provisionDockerMachine
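	The comment block in the generated unit above illustrates the standard systemd override pattern: an empty `ExecStart=` first clears the command inherited from the base unit, since declaring a second `ExecStart=` without clearing makes systemd refuse the unit for any type other than `Type=oneshot`. A minimal drop-in using the same pattern (the path and dockerd flags here are illustrative, not minikube's):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
[Service]
# Clear the ExecStart inherited from the base unit; without this line,
# systemd rejects the unit with:
#   "Service has more than one ExecStart= setting, which is only allowed
#    for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

	After editing a drop-in, `sudo systemctl daemon-reload && sudo systemctl restart docker` applies it, which is exactly what the provisioning command above does.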
	I0923 10:21:14.602293    8275 client.go:171] duration metric: took 11.997574442s to LocalClient.Create
	I0923 10:21:14.602321    8275 start.go:167] duration metric: took 11.997654828s to libmachine.API.Create "addons-193618"
	I0923 10:21:14.602353    8275 start.go:293] postStartSetup for "addons-193618" (driver="docker")
	I0923 10:21:14.602379    8275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:21:14.602466    8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:21:14.602541    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:14.619671    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:14.713987    8275 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:21:14.717161    8275 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:21:14.717197    8275 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:21:14.717210    8275 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:21:14.717230    8275 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:21:14.717244    8275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-2206/.minikube/addons for local assets ...
	I0923 10:21:14.717318    8275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-2206/.minikube/files for local assets ...
	I0923 10:21:14.717347    8275 start.go:296] duration metric: took 114.974073ms for postStartSetup
	I0923 10:21:14.717658    8275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193618
	I0923 10:21:14.733522    8275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/config.json ...
	I0923 10:21:14.733796    8275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:21:14.733846    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:14.749927    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:14.841805    8275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 10:21:14.846242    8275 start.go:128] duration metric: took 12.245156029s to createHost
	I0923 10:21:14.846270    8275 start.go:83] releasing machines lock for "addons-193618", held for 12.24528901s
	I0923 10:21:14.846339    8275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193618
	I0923 10:21:14.863090    8275 ssh_runner.go:195] Run: cat /version.json
	I0923 10:21:14.863152    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:14.863403    8275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:21:14.863477    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:14.884145    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:14.894601    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:14.976522    8275 ssh_runner.go:195] Run: systemctl --version
	I0923 10:21:15.112557    8275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:21:15.118103    8275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 10:21:15.150524    8275 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 10:21:15.150644    8275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:21:15.185900    8275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 10:21:15.185941    8275 start.go:495] detecting cgroup driver to use...
	I0923 10:21:15.185983    8275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:21:15.186097    8275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
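	The `printf … | sudo tee` idiom above is how a root-owned file is written from an unprivileged shell: a plain `>` redirection would be performed by the calling shell before `sudo` takes effect, while `tee` runs under `sudo` and can open the destination. The same shape, demonstrated on a scratch path that needs no privileges:

```shell
# Write a crictl-style config via tee. With sudo in front of tee, the
# redirection problem disappears because tee itself opens the file.
mkdir -p /tmp/etc.demo
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  | tee /tmp/etc.demo/crictl.yaml
```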
	I0923 10:21:15.204497    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:21:15.215246    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:21:15.225593    8275 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:21:15.225663    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:21:15.235613    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:21:15.245769    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:21:15.255562    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:21:15.265489    8275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:21:15.274480    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:21:15.284082    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:21:15.293674    8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
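	The containerd reconfiguration above is a series of in-place `sed` rewrites; the `( *)` capture group preserves whatever indentation the key already had so TOML table nesting stays intact. The pattern can be tried on a scratch copy of a config (the file path and key values here are illustrative):

```shell
# Create a scratch containerd-style config with an indented key, then
# flip SystemdCgroup while keeping the original leading whitespace.
cat > /tmp/config.demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
EOF

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.demo.toml
cat /tmp/config.demo.toml
```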
	I0923 10:21:15.303210    8275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:21:15.311456    8275 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:21:15.311540    8275 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:21:15.325183    8275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:21:15.333724    8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:15.416555    8275 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 10:21:15.518505    8275 start.go:495] detecting cgroup driver to use...
	I0923 10:21:15.518598    8275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:21:15.518670    8275 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 10:21:15.531840    8275 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 10:21:15.531947    8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 10:21:15.551371    8275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:21:15.568800    8275 ssh_runner.go:195] Run: which cri-dockerd
	I0923 10:21:15.574906    8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:21:15.584573    8275 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:21:15.612064    8275 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 10:21:15.713676    8275 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 10:21:15.820450    8275 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:21:15.820650    8275 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:21:15.841683    8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:15.928564    8275 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 10:21:16.197732    8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:21:16.210185    8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:21:16.222651    8275 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:21:16.317657    8275 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 10:21:16.409930    8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:16.503443    8275 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 10:21:16.517391    8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:21:16.529144    8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:16.621950    8275 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 10:21:16.701099    8275 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:21:16.701258    8275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 10:21:16.706348    8275 start.go:563] Will wait 60s for crictl version
	I0923 10:21:16.706484    8275 ssh_runner.go:195] Run: which crictl
	I0923 10:21:16.710187    8275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:21:16.745214    8275 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 10:21:16.745286    8275 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:21:16.768526    8275 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:21:16.793108    8275 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 10:21:16.793212    8275 cli_runner.go:164] Run: docker network inspect addons-193618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:21:16.808755    8275 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 10:21:16.812130    8275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
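	The hosts-file rewrite above is idempotent: `grep -v` strips any existing `host.minikube.internal` line before the fresh mapping is appended, so repeated provisioning runs never accumulate duplicate entries. The same technique on a scratch file (paths and the IP are illustrative):

```shell
# Replace-or-append a hosts entry without duplicating it on repeat runs.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$HOSTS"

# Drop any existing mapping, then append the current one.
{ grep -v 'host.minikube.internal$' "$HOSTS"
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```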
	I0923 10:21:16.822926    8275 kubeadm.go:883] updating cluster {Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:21:16.823039    8275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:16.823091    8275 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:21:16.840249    8275 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 10:21:16.840270    8275 docker.go:615] Images already preloaded, skipping extraction
	I0923 10:21:16.840334    8275 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:21:16.858635    8275 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 10:21:16.858661    8275 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:21:16.858671    8275 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0923 10:21:16.858765    8275 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-193618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:21:16.858835    8275 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 10:21:16.901788    8275 cni.go:84] Creating CNI manager for ""
	I0923 10:21:16.901822    8275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:16.901835    8275 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:21:16.901855    8275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-193618 NodeName:addons-193618 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:21:16.902012    8275 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-193618"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:21:16.902081    8275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:21:16.911070    8275 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:21:16.911138    8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:21:16.919548    8275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 10:21:16.936910    8275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:21:16.955024    8275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0923 10:21:16.972399    8275 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:21:16.976061    8275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:21:16.986584    8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:17.073435    8275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:21:17.088710    8275 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618 for IP: 192.168.49.2
	I0923 10:21:17.088780    8275 certs.go:194] generating shared ca certs ...
	I0923 10:21:17.088810    8275 certs.go:226] acquiring lock for ca certs: {Name:mk65c867ec8f333e41d1cce69d234e86fc7ac1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:17.089009    8275 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key
	I0923 10:21:17.353855    8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt ...
	I0923 10:21:17.353889    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt: {Name:mk20e2832fd3e141701b8471b89bb04526400614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:17.354116    8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key ...
	I0923 10:21:17.354132    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key: {Name:mkaef70a9f9e29ad452ad4a00856aea93875efe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:17.354226    8275 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key
	I0923 10:21:18.611693    8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.crt ...
	I0923 10:21:18.611767    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.crt: {Name:mk28403ad2f8201297ec9ab70e4e9be5e67739bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:18.612017    8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key ...
	I0923 10:21:18.612055    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key: {Name:mkb3eee3f7c5f6e127c953ada21e2c55ff322612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:18.612176    8275 certs.go:256] generating profile certs ...
	I0923 10:21:18.612262    8275 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.key
	I0923 10:21:18.612309    8275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt with IP's: []
	I0923 10:21:18.915553    8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt ...
	I0923 10:21:18.915631    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: {Name:mkda59fae36ef039237e0aef270394146815ca53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:18.915846    8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.key ...
	I0923 10:21:18.915880    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.key: {Name:mk6490ee68106ed62c0f414cc380568ec2388aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:18.915997    8275 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9
	I0923 10:21:18.916038    8275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 10:21:19.267671    8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9 ...
	I0923 10:21:19.267703    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9: {Name:mk5d1194dcb297b383390fb12f2169d0bb2be05a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:19.267907    8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9 ...
	I0923 10:21:19.267923    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9: {Name:mkc0abb657ec219b0d73783a5b275bbe8b105742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:19.268009    8275 certs.go:381] copying /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9 -> /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt
	I0923 10:21:19.268087    8275 certs.go:385] copying /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9 -> /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key
	I0923 10:21:19.268140    8275 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key
	I0923 10:21:19.268160    8275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt with IP's: []
	I0923 10:21:19.858385    8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt ...
	I0923 10:21:19.858423    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt: {Name:mk78ce6447041726e1168434088a182b3dcc5c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:19.858630    8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key ...
	I0923 10:21:19.858644    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key: {Name:mk93e48fbce21cbcb230e2f97022950562a913c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:19.858869    8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 10:21:19.858908    8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:21:19.858939    8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:21:19.859004    8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/key.pem (1675 bytes)
	I0923 10:21:19.859614    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:21:19.885634    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:21:19.910294    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:21:19.935132    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 10:21:19.959050    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:21:19.983993    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:21:20.017649    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:21:20.047502    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:21:20.072878    8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:21:20.099702    8275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:21:20.121570    8275 ssh_runner.go:195] Run: openssl version
	I0923 10:21:20.128087    8275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:21:20.139042    8275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:20.143358    8275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:20.143468    8275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:20.151191    8275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:21:20.162139    8275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:21:20.166795    8275 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:21:20.166897    8275 kubeadm.go:392] StartCluster: {Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:20.167055    8275 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:21:20.186430    8275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:21:20.195704    8275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:21:20.204685    8275 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 10:21:20.204786    8275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:21:20.214193    8275 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:21:20.214216    8275 kubeadm.go:157] found existing configuration files:
	
	I0923 10:21:20.214269    8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:21:20.223232    8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:21:20.223325    8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:21:20.232039    8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:21:20.241341    8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:21:20.241407    8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:21:20.249992    8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:21:20.259088    8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:21:20.259175    8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:21:20.267787    8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:21:20.277490    8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:21:20.277570    8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:21:20.286035    8275 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 10:21:20.338218    8275 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:21:20.338548    8275 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:21:20.360314    8275 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 10:21:20.360391    8275 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 10:21:20.360432    8275 kubeadm.go:310] OS: Linux
	I0923 10:21:20.360486    8275 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 10:21:20.360538    8275 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 10:21:20.360590    8275 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 10:21:20.360642    8275 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 10:21:20.360694    8275 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 10:21:20.360746    8275 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 10:21:20.360798    8275 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 10:21:20.360849    8275 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 10:21:20.360899    8275 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 10:21:20.420853    8275 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:21:20.421069    8275 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:21:20.421203    8275 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:21:20.433346    8275 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:21:20.435868    8275 out.go:235]   - Generating certificates and keys ...
	I0923 10:21:20.435983    8275 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:21:20.436061    8275 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:21:20.946154    8275 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:21:21.450257    8275 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:21:21.973955    8275 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:21:22.345911    8275 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:21:22.657416    8275 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:21:22.657637    8275 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-193618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:21:23.066601    8275 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:21:23.066948    8275 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-193618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:21:23.340197    8275 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:21:23.885150    8275 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:21:24.134994    8275 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:21:24.135315    8275 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:21:24.277655    8275 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:21:24.725288    8275 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:21:25.080394    8275 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:21:25.338873    8275 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:21:25.935738    8275 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:21:25.936462    8275 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:21:25.939474    8275 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:21:25.941700    8275 out.go:235]   - Booting up control plane ...
	I0923 10:21:25.941802    8275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:21:25.941877    8275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:21:25.942551    8275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:21:25.953838    8275 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:21:25.960423    8275 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:21:25.960693    8275 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:21:26.066935    8275 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:21:26.067054    8275 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:21:27.067917    8275 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001091799s
	I0923 10:21:27.068006    8275 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:21:33.070418    8275 kubeadm.go:310] [api-check] The API server is healthy after 6.002383818s
	I0923 10:21:33.094317    8275 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:21:33.110784    8275 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:21:33.135663    8275 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:21:33.135861    8275 kubeadm.go:310] [mark-control-plane] Marking the node addons-193618 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:21:33.146632    8275 kubeadm.go:310] [bootstrap-token] Using token: y8cva3.7obprnrgdellylf0
	I0923 10:21:33.148898    8275 out.go:235]   - Configuring RBAC rules ...
	I0923 10:21:33.149044    8275 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:21:33.153445    8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:21:33.161340    8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:21:33.165695    8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:21:33.169722    8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:21:33.175677    8275 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:21:33.478711    8275 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:21:33.943731    8275 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:21:34.478802    8275 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:21:34.479949    8275 kubeadm.go:310] 
	I0923 10:21:34.480020    8275 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:21:34.480026    8275 kubeadm.go:310] 
	I0923 10:21:34.480102    8275 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:21:34.480106    8275 kubeadm.go:310] 
	I0923 10:21:34.480131    8275 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:21:34.480190    8275 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:21:34.480240    8275 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:21:34.480245    8275 kubeadm.go:310] 
	I0923 10:21:34.480298    8275 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:21:34.480303    8275 kubeadm.go:310] 
	I0923 10:21:34.480351    8275 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:21:34.480355    8275 kubeadm.go:310] 
	I0923 10:21:34.480420    8275 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:21:34.480496    8275 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:21:34.480563    8275 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:21:34.480568    8275 kubeadm.go:310] 
	I0923 10:21:34.480650    8275 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:21:34.480726    8275 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:21:34.480731    8275 kubeadm.go:310] 
	I0923 10:21:34.480813    8275 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y8cva3.7obprnrgdellylf0 \
	I0923 10:21:34.480914    8275 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e43ba79877030aa66abb7e0cea888323e4e60db42c6d4031199b0da3893be839 \
	I0923 10:21:34.480935    8275 kubeadm.go:310] 	--control-plane 
	I0923 10:21:34.480964    8275 kubeadm.go:310] 
	I0923 10:21:34.481049    8275 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:21:34.481054    8275 kubeadm.go:310] 
	I0923 10:21:34.481134    8275 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y8cva3.7obprnrgdellylf0 \
	I0923 10:21:34.481234    8275 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e43ba79877030aa66abb7e0cea888323e4e60db42c6d4031199b0da3893be839 
	I0923 10:21:34.484886    8275 kubeadm.go:310] W0923 10:21:20.334263    1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:21:34.485208    8275 kubeadm.go:310] W0923 10:21:20.335589    1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:21:34.485426    8275 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 10:21:34.485534    8275 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:21:34.485553    8275 cni.go:84] Creating CNI manager for ""
	I0923 10:21:34.485569    8275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:34.487817    8275 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:21:34.489695    8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:21:34.498372    8275 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:21:34.518833    8275 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:21:34.518992    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:34.519093    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-193618 minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-193618 minikube.k8s.io/primary=true
	I0923 10:21:34.665967    8275 ops.go:34] apiserver oom_adj: -16
	I0923 10:21:34.666099    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:35.166498    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:35.666217    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:36.166927    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:36.666498    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:37.166665    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:37.666207    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:38.167108    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:38.666154    8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:38.772209    8275 kubeadm.go:1113] duration metric: took 4.253271402s to wait for elevateKubeSystemPrivileges
	I0923 10:21:38.772237    8275 kubeadm.go:394] duration metric: took 18.605345541s to StartCluster
	I0923 10:21:38.772253    8275 settings.go:142] acquiring lock: {Name:mk4964809950bdfd828e78cd468eb635fb21d14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:38.772367    8275 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-2206/kubeconfig
	I0923 10:21:38.772724    8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/kubeconfig: {Name:mkff2b2c053c0153995d92eef0e52da52f6d4736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:38.772895    8275 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:21:38.773045    8275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:21:38.773268    8275 config.go:182] Loaded profile config "addons-193618": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:38.773295    8275 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:21:38.773394    8275 addons.go:69] Setting yakd=true in profile "addons-193618"
	I0923 10:21:38.773406    8275 addons.go:234] Setting addon yakd=true in "addons-193618"
	I0923 10:21:38.773455    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.773917    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.774424    8275 addons.go:69] Setting inspektor-gadget=true in profile "addons-193618"
	I0923 10:21:38.774445    8275 addons.go:234] Setting addon inspektor-gadget=true in "addons-193618"
	I0923 10:21:38.774467    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.774535    8275 addons.go:69] Setting metrics-server=true in profile "addons-193618"
	I0923 10:21:38.774555    8275 addons.go:234] Setting addon metrics-server=true in "addons-193618"
	I0923 10:21:38.774579    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.774913    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.775211    8275 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-193618"
	I0923 10:21:38.775233    8275 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-193618"
	I0923 10:21:38.775276    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.775784    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.776261    8275 addons.go:69] Setting registry=true in profile "addons-193618"
	I0923 10:21:38.776283    8275 addons.go:234] Setting addon registry=true in "addons-193618"
	I0923 10:21:38.776306    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.776718    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.779457    8275 addons.go:69] Setting cloud-spanner=true in profile "addons-193618"
	I0923 10:21:38.779511    8275 addons.go:234] Setting addon cloud-spanner=true in "addons-193618"
	I0923 10:21:38.779568    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.780347    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.791916    8275 addons.go:69] Setting storage-provisioner=true in profile "addons-193618"
	I0923 10:21:38.791949    8275 addons.go:234] Setting addon storage-provisioner=true in "addons-193618"
	I0923 10:21:38.791984    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.792463    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.804452    8275 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-193618"
	I0923 10:21:38.807256    8275 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-193618"
	I0923 10:21:38.808725    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.809129    8275 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-193618"
	I0923 10:21:38.809201    8275 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-193618"
	I0923 10:21:38.809248    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.809905    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.816386    8275 addons.go:69] Setting volcano=true in profile "addons-193618"
	I0923 10:21:38.816522    8275 addons.go:234] Setting addon volcano=true in "addons-193618"
	I0923 10:21:38.816557    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.816620    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.816437    8275 addons.go:69] Setting default-storageclass=true in profile "addons-193618"
	I0923 10:21:38.821155    8275 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-193618"
	I0923 10:21:38.824331    8275 addons.go:69] Setting volumesnapshots=true in profile "addons-193618"
	I0923 10:21:38.824381    8275 addons.go:234] Setting addon volumesnapshots=true in "addons-193618"
	I0923 10:21:38.824416    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.825067    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.825462    8275 out.go:177] * Verifying Kubernetes components...
	I0923 10:21:38.816444    8275 addons.go:69] Setting gcp-auth=true in profile "addons-193618"
	I0923 10:21:38.825667    8275 mustload.go:65] Loading cluster: addons-193618
	I0923 10:21:38.825826    8275 config.go:182] Loaded profile config "addons-193618": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:38.826045    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.816456    8275 addons.go:69] Setting ingress=true in profile "addons-193618"
	I0923 10:21:38.849148    8275 addons.go:234] Setting addon ingress=true in "addons-193618"
	I0923 10:21:38.849201    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.849657    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.816460    8275 addons.go:69] Setting ingress-dns=true in profile "addons-193618"
	I0923 10:21:38.877240    8275 addons.go:234] Setting addon ingress-dns=true in "addons-193618"
	I0923 10:21:38.877289    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:38.911664    8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:21:38.924519    8275 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:21:38.924604    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.928179    8275 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:21:38.928201    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:21:38.928270    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:38.960794    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.964651    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:38.987565    8275 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:21:38.990068    8275 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:38.990097    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:21:38.990166    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.009253    8275 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:21:39.013290    8275 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:21:39.013383    8275 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:21:39.013394    8275 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:21:39.013476    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.017305    8275 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:21:39.019294    8275 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:21:39.019337    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:21:39.019409    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.035386    8275 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:21:39.037862    8275 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:21:39.037888    8275 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:21:39.037958    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.060118    8275 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-193618"
	I0923 10:21:39.060161    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:39.060579    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:39.116031    8275 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:21:39.116316    8275 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:21:39.120926    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:39.122731    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:21:39.122936    8275 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:21:39.122974    8275 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:21:39.123048    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.130775    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:21:39.131136    8275 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:21:39.131150    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:21:39.131213    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.140934    8275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:21:39.141361    8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:21:39.141393    8275 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:21:39.141500    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.149160    8275 addons.go:234] Setting addon default-storageclass=true in "addons-193618"
	I0923 10:21:39.149201    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:39.149612    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:39.167319    8275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:21:39.170697    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:21:39.172719    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:21:39.174389    8275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:21:39.177067    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:21:39.178028    8275 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:21:39.178048    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:21:39.178120    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.206937    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:21:39.228411    8275 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 10:21:39.246300    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:21:39.249175    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.257183    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.268988    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.274588    8275 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:21:39.278461    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:21:39.279444    8275 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:21:39.279499    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:21:39.279584    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.282996    8275 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 10:21:39.283468    8275 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:21:39.289802    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.290437    8275 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:21:39.292183    8275 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:21:39.292288    8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:21:39.292301    8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:21:39.292363    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.297176    8275 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:21:39.297195    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:21:39.297255    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.303791    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.304532    8275 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 10:21:39.312771    8275 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:21:39.312845    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 10:21:39.312935    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.345768    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.360739    8275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:21:39.382430    8275 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:39.382451    8275 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:21:39.382614    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:39.393487    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.397078    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.410512    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.427210    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.432291    8275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:21:39.445046    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.453989    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.454856    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.474817    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:39.995523    8275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:21:39.995597    8275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:21:40.041523    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:40.303253    8275 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:21:40.303278    8275 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:21:40.376892    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:21:40.435838    8275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:21:40.435864    8275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:21:40.453238    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:21:40.477108    8275 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:21:40.477131    8275 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:21:40.504517    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:21:40.507150    8275 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:21:40.507177    8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:21:40.577860    8275 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:21:40.577887    8275 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:21:40.607156    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:21:40.722282    8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:21:40.722309    8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:21:40.803754    8275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:21:40.803780    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:21:40.807195    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:40.811865    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:21:40.826413    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:21:40.867295    8275 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:21:40.867321    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:21:40.984712    8275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:21:40.984739    8275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:21:41.002724    8275 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:21:41.002759    8275 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:21:41.015897    8275 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:21:41.015939    8275 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:21:41.055043    8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:21:41.055073    8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:21:41.094448    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:21:41.139429    8275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:21:41.139455    8275 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:21:41.173266    8275 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:21:41.173291    8275 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:21:41.230600    8275 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:21:41.230628    8275 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:21:41.233131    8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:21:41.233157    8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:21:41.236484    8275 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:21:41.236512    8275 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:21:41.415084    8275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:21:41.415110    8275 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:21:41.558974    8275 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:21:41.559000    8275 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:21:41.603977    8275 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:41.604004    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:21:41.608071    8275 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:21:41.608142    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:21:41.630693    8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:21:41.630777    8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:21:41.759527    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:21:41.850800    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:41.925284    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:21:41.936664    8275 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:21:41.936749    8275 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:21:41.967950    8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:21:41.968030    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:21:42.027553    8275 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.666777731s)
	I0923 10:21:42.027635    8275 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 10:21:42.028810    8275 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.596492853s)
	I0923 10:21:42.030068    8275 node_ready.go:35] waiting up to 6m0s for node "addons-193618" to be "Ready" ...
	I0923 10:21:42.033202    8275 node_ready.go:49] node "addons-193618" has status "Ready":"True"
	I0923 10:21:42.033291    8275 node_ready.go:38] duration metric: took 3.135439ms for node "addons-193618" to be "Ready" ...
	I0923 10:21:42.033318    8275 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:21:42.046124    8275 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6jz2z" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:42.324373    8275 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:21:42.324467    8275 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:21:42.442187    8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:21:42.442273    8275 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:21:42.538799    8275 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-193618" context rescaled to 1 replicas
	I0923 10:21:42.553010    8275 pod_ready.go:93] pod "coredns-7c65d6cfc9-6jz2z" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:42.553081    8275 pod_ready.go:82] duration metric: took 506.871995ms for pod "coredns-7c65d6cfc9-6jz2z" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:42.553107    8275 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lxqrw" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:42.653045    8275 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:21:42.653116    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:21:42.800675    8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:21:42.800748    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:21:42.986387    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:21:43.099733    8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:21:43.099761    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:21:43.300639    8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:21:43.300669    8275 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:21:43.787337    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:21:44.560442    8275 pod_ready.go:93] pod "coredns-7c65d6cfc9-lxqrw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:44.560522    8275 pod_ready.go:82] duration metric: took 2.007395385s for pod "coredns-7c65d6cfc9-lxqrw" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:44.560552    8275 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:44.576263    8275 pod_ready.go:93] pod "etcd-addons-193618" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:44.576285    8275 pod_ready.go:82] duration metric: took 15.71347ms for pod "etcd-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:44.576296    8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:44.923043    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.881435628s)
	I0923 10:21:46.086178    8275 pod_ready.go:93] pod "kube-apiserver-addons-193618" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:46.086203    8275 pod_ready.go:82] duration metric: took 1.509899436s for pod "kube-apiserver-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:46.086216    8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:46.092834    8275 pod_ready.go:93] pod "kube-controller-manager-addons-193618" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:46.092860    8275 pod_ready.go:82] duration metric: took 6.636713ms for pod "kube-controller-manager-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:46.092874    8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9k229" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:46.106238    8275 pod_ready.go:93] pod "kube-proxy-9k229" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:46.106266    8275 pod_ready.go:82] duration metric: took 13.384572ms for pod "kube-proxy-9k229" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:46.106278    8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:46.174020    8275 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:21:46.174108    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:46.201210    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:46.435432    8275 pod_ready.go:93] pod "kube-scheduler-addons-193618" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:46.435460    8275 pod_ready.go:82] duration metric: took 329.172367ms for pod "kube-scheduler-addons-193618" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:46.435472    8275 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:47.255605    8275 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:21:47.277781    8275 addons.go:234] Setting addon gcp-auth=true in "addons-193618"
	I0923 10:21:47.277889    8275 host.go:66] Checking if "addons-193618" exists ...
	I0923 10:21:47.278428    8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
	I0923 10:21:47.311058    8275 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:21:47.311108    8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
	I0923 10:21:47.338658    8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
	I0923 10:21:48.455541    8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:49.823600    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.446670796s)
	I0923 10:21:49.823632    8275 addons.go:475] Verifying addon ingress=true in "addons-193618"
	I0923 10:21:49.826903    8275 out.go:177] * Verifying ingress addon...
	I0923 10:21:49.829658    8275 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:21:49.836036    8275 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:21:49.836060    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:50.335213    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:50.468012    8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:50.873187    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:51.336548    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:51.861677    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:51.980069    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.526788305s)
	I0923 10:21:51.980144    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.475605967s)
	I0923 10:21:51.980353    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.373170776s)
	I0923 10:21:51.980385    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.173170005s)
	I0923 10:21:51.980473    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.168586301s)
	I0923 10:21:51.980515    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.154080316s)
	I0923 10:21:51.980551    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.886079662s)
	I0923 10:21:51.980564    8275 addons.go:475] Verifying addon registry=true in "addons-193618"
	I0923 10:21:51.980766    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.22114604s)
	I0923 10:21:51.980788    8275 addons.go:475] Verifying addon metrics-server=true in "addons-193618"
	I0923 10:21:51.980870    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.130044891s)
	W0923 10:21:51.980893    8275 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:21:51.980914    8275 retry.go:31] will retry after 172.911112ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:21:51.980975    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.055619208s)
	I0923 10:21:51.981291    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.994813747s)
	I0923 10:21:51.983992    8275 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-193618 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:21:51.984096    8275 out.go:177] * Verifying registry addon...
	I0923 10:21:51.986809    8275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:21:52.009490    8275 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:21:52.009525    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0923 10:21:52.045303    8275 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0923 10:21:52.154445    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:52.372282    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:52.490589    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:52.842144    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:52.853811    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.066433471s)
	I0923 10:21:52.853892    8275 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-193618"
	I0923 10:21:52.854123    8275 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.543043902s)
	I0923 10:21:52.856072    8275 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:21:52.856179    8275 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:21:52.858463    8275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:21:52.860530    8275 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:21:52.862701    8275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:21:52.862763    8275 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:21:52.870189    8275 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:21:52.870212    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:52.942508    8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:52.982103    8275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:21:52.982172    8275 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:21:52.990943    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:53.053476    8275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:21:53.053551    8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:21:53.114230    8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:21:53.335144    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:53.368813    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:53.491404    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:53.835044    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:53.864128    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:53.991519    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:54.338401    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:54.436620    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:54.537074    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:54.545216    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.390722268s)
	I0923 10:21:54.545297    8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.431043288s)
	I0923 10:21:54.548176    8275 addons.go:475] Verifying addon gcp-auth=true in "addons-193618"
	I0923 10:21:54.551692    8275 out.go:177] * Verifying gcp-auth addon...
	I0923 10:21:54.554246    8275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:21:54.557692    8275 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:21:54.833832    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:54.863325    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:54.990927    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:55.334876    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:55.363406    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:55.445277    8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:55.491381    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:55.834827    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:55.863275    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:55.991093    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:56.334171    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:56.435518    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:56.535175    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:56.834546    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:56.862971    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:56.990447    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:57.334319    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:57.363645    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:57.442300    8275 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:57.442322    8275 pod_ready.go:82] duration metric: took 11.006842129s for pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:57.442333    8275 pod_ready.go:39] duration metric: took 15.408987414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:21:57.442354    8275 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:21:57.442420    8275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:57.458691    8275 api_server.go:72] duration metric: took 18.685768055s to wait for apiserver process to appear ...
	I0923 10:21:57.458722    8275 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:21:57.458742    8275 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 10:21:57.466464    8275 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 10:21:57.467668    8275 api_server.go:141] control plane version: v1.31.1
	I0923 10:21:57.467692    8275 api_server.go:131] duration metric: took 8.96308ms to wait for apiserver health ...
	I0923 10:21:57.467701    8275 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:21:57.477646    8275 system_pods.go:59] 17 kube-system pods found
	I0923 10:21:57.477683    8275 system_pods.go:61] "coredns-7c65d6cfc9-lxqrw" [06a58e53-b760-4639-8a16-e33921af5734] Running
	I0923 10:21:57.477695    8275 system_pods.go:61] "csi-hostpath-attacher-0" [40a6270a-ae77-4da3-8c74-f4fa0beb8093] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:21:57.477703    8275 system_pods.go:61] "csi-hostpath-resizer-0" [cd924f79-8c12-4c51-a6b2-f212b26f8511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:21:57.477712    8275 system_pods.go:61] "csi-hostpathplugin-5fdgw" [2794263a-9a60-4faf-8479-39c29b19318e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:21:57.477717    8275 system_pods.go:61] "etcd-addons-193618" [dca22102-d2ff-44af-8467-f5e43f3f285a] Running
	I0923 10:21:57.477722    8275 system_pods.go:61] "kube-apiserver-addons-193618" [349caafb-07cf-4118-a7d9-9e176b0b0117] Running
	I0923 10:21:57.477733    8275 system_pods.go:61] "kube-controller-manager-addons-193618" [6579d226-8860-47a6-b281-27b083a0eb8c] Running
	I0923 10:21:57.477740    8275 system_pods.go:61] "kube-ingress-dns-minikube" [b87d564f-0e35-4054-9131-fba8d4523e89] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 10:21:57.477747    8275 system_pods.go:61] "kube-proxy-9k229" [d5bee045-aa08-4eb9-bd38-15d84b988e75] Running
	I0923 10:21:57.477752    8275 system_pods.go:61] "kube-scheduler-addons-193618" [2328b96b-0fec-4b16-b3a7-8541b1afa7e9] Running
	I0923 10:21:57.477757    8275 system_pods.go:61] "metrics-server-84c5f94fbc-2sqlh" [8f5addb1-a8f0-4eab-a10b-b9726aa3efae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:21:57.477762    8275 system_pods.go:61] "nvidia-device-plugin-daemonset-5mdqb" [aefa91be-a5e1-48f3-a1b2-2499c4661d89] Running
	I0923 10:21:57.477772    8275 system_pods.go:61] "registry-66c9cd494c-k2qlh" [638f01b9-2726-41db-a1a9-43e4bf4d8443] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:21:57.477778    8275 system_pods.go:61] "registry-proxy-bfrml" [cab49b7f-8d32-4017-9de8-d55b0ce0e2f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:21:57.477787    8275 system_pods.go:61] "snapshot-controller-56fcc65765-ffcj9" [1ce169da-19ba-425a-8cd5-6d3f822f219a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:57.477797    8275 system_pods.go:61] "snapshot-controller-56fcc65765-zb4t6" [159d70bf-1cd6-47ab-9755-77249cf27379] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:57.477807    8275 system_pods.go:61] "storage-provisioner" [77de7aea-ddda-4eeb-8a47-3e6564a3f597] Running
	I0923 10:21:57.477814    8275 system_pods.go:74] duration metric: took 10.106849ms to wait for pod list to return data ...
	I0923 10:21:57.477824    8275 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:21:57.481384    8275 default_sa.go:45] found service account: "default"
	I0923 10:21:57.481409    8275 default_sa.go:55] duration metric: took 3.576285ms for default service account to be created ...
	I0923 10:21:57.481419    8275 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:21:57.490827    8275 system_pods.go:86] 17 kube-system pods found
	I0923 10:21:57.490864    8275 system_pods.go:89] "coredns-7c65d6cfc9-lxqrw" [06a58e53-b760-4639-8a16-e33921af5734] Running
	I0923 10:21:57.490876    8275 system_pods.go:89] "csi-hostpath-attacher-0" [40a6270a-ae77-4da3-8c74-f4fa0beb8093] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:21:57.490884    8275 system_pods.go:89] "csi-hostpath-resizer-0" [cd924f79-8c12-4c51-a6b2-f212b26f8511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:21:57.490891    8275 system_pods.go:89] "csi-hostpathplugin-5fdgw" [2794263a-9a60-4faf-8479-39c29b19318e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:21:57.490899    8275 system_pods.go:89] "etcd-addons-193618" [dca22102-d2ff-44af-8467-f5e43f3f285a] Running
	I0923 10:21:57.490906    8275 system_pods.go:89] "kube-apiserver-addons-193618" [349caafb-07cf-4118-a7d9-9e176b0b0117] Running
	I0923 10:21:57.490916    8275 system_pods.go:89] "kube-controller-manager-addons-193618" [6579d226-8860-47a6-b281-27b083a0eb8c] Running
	I0923 10:21:57.490923    8275 system_pods.go:89] "kube-ingress-dns-minikube" [b87d564f-0e35-4054-9131-fba8d4523e89] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 10:21:57.490933    8275 system_pods.go:89] "kube-proxy-9k229" [d5bee045-aa08-4eb9-bd38-15d84b988e75] Running
	I0923 10:21:57.490938    8275 system_pods.go:89] "kube-scheduler-addons-193618" [2328b96b-0fec-4b16-b3a7-8541b1afa7e9] Running
	I0923 10:21:57.490943    8275 system_pods.go:89] "metrics-server-84c5f94fbc-2sqlh" [8f5addb1-a8f0-4eab-a10b-b9726aa3efae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:21:57.490948    8275 system_pods.go:89] "nvidia-device-plugin-daemonset-5mdqb" [aefa91be-a5e1-48f3-a1b2-2499c4661d89] Running
	I0923 10:21:57.490957    8275 system_pods.go:89] "registry-66c9cd494c-k2qlh" [638f01b9-2726-41db-a1a9-43e4bf4d8443] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:21:57.490963    8275 system_pods.go:89] "registry-proxy-bfrml" [cab49b7f-8d32-4017-9de8-d55b0ce0e2f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:21:57.490972    8275 system_pods.go:89] "snapshot-controller-56fcc65765-ffcj9" [1ce169da-19ba-425a-8cd5-6d3f822f219a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:57.490980    8275 system_pods.go:89] "snapshot-controller-56fcc65765-zb4t6" [159d70bf-1cd6-47ab-9755-77249cf27379] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:57.490987    8275 system_pods.go:89] "storage-provisioner" [77de7aea-ddda-4eeb-8a47-3e6564a3f597] Running
	I0923 10:21:57.490994    8275 system_pods.go:126] duration metric: took 9.569677ms to wait for k8s-apps to be running ...
	I0923 10:21:57.491001    8275 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:21:57.491059    8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:21:57.494399    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:57.513723    8275 system_svc.go:56] duration metric: took 22.712726ms WaitForService to wait for kubelet
	I0923 10:21:57.513749    8275 kubeadm.go:582] duration metric: took 18.740831155s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:57.513768    8275 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:21:57.517393    8275 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 10:21:57.517426    8275 node_conditions.go:123] node cpu capacity is 2
	I0923 10:21:57.517437    8275 node_conditions.go:105] duration metric: took 3.663097ms to run NodePressure ...
	I0923 10:21:57.517450    8275 start.go:241] waiting for startup goroutines ...
	I0923 10:21:57.834553    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:57.863925    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:57.992141    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:58.334838    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:58.364533    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:58.491245    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:58.841651    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:58.864020    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:58.990392    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:59.334130    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:59.363846    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:59.490424    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:59.833900    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:21:59.864019    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:59.991073    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:00.336986    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:00.377389    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:00.493259    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:00.834839    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:00.863239    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:00.991160    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:01.334941    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:01.363854    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:01.491877    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:01.833992    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:01.863417    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:01.991026    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:02.334720    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:02.363130    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:02.490820    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:02.834139    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:02.863684    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:02.991305    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:03.335320    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:03.364726    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:03.491384    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:03.834197    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:03.863806    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:03.990992    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:04.334528    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:04.363449    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:04.491507    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:04.834443    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:04.863673    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:04.991434    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:05.335032    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:05.364054    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:05.491174    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:05.838134    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:05.866791    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:05.991074    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:06.334650    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:06.363260    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:06.491087    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:06.834196    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:06.863856    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:06.990657    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:07.334212    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:07.363923    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:07.490791    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:07.834030    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:07.863635    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:07.990208    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:08.340627    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:08.363909    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:08.495162    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:08.844799    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:08.865886    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:08.991259    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:09.334913    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:09.364489    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:09.492179    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:09.834287    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:09.863836    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:09.990691    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:10.335909    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:10.363965    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:10.491321    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:10.834354    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:10.863870    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:10.990665    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:11.338139    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:11.437651    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:11.492080    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:11.834179    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:11.863648    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:11.991165    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:12.334665    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:12.363847    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:12.490727    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:12.834731    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:12.863446    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:12.991112    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:13.335716    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:13.365403    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:13.490848    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:13.834745    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:13.868662    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:13.991175    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:14.335839    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:14.364116    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:14.491378    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:14.834810    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:14.863732    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:14.991242    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:15.334686    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:15.363578    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:15.503755    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:15.834893    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:15.863520    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:15.991778    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:16.334003    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:16.363177    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:16.491536    8275 kapi.go:107] duration metric: took 24.504724609s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:22:16.835712    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:16.863604    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:17.336577    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:17.363615    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:17.834154    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:17.864226    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:18.333728    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:18.368294    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:18.834962    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:18.864067    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:19.334513    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:19.362692    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:19.834047    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:19.863362    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:20.335210    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:20.365751    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:20.834842    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:20.863945    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:21.335280    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:21.364095    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:21.834421    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:21.864506    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:22.336304    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:22.437057    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:22.845295    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:22.946588    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:23.334469    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:23.364818    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:23.834151    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:23.863907    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:24.335640    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:24.365018    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:24.835637    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:24.863922    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:25.334616    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:25.363160    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:25.836341    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:25.864477    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:26.335671    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:26.362842    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:26.837804    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:26.888184    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:27.334593    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:27.363338    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:27.834633    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:27.863527    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:28.336049    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:28.436683    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:28.835040    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:28.863410    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:29.334455    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:29.362853    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:29.834556    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:29.863079    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:30.334539    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:30.362888    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:30.835423    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:30.864913    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:31.350571    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:31.364708    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:31.836234    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:31.863686    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:32.335310    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:32.363521    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:32.901268    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:32.902348    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:33.334932    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:33.363422    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:33.834017    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:33.863942    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:34.334978    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:34.363694    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:34.834469    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:34.863921    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:35.337805    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:35.370555    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:35.835354    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:35.864616    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:36.334497    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:36.362894    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:36.835348    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:36.864205    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.334513    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.362996    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.834324    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.863134    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.334826    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:38.364004    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.837595    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:38.936058    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.335041    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.366877    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.834621    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.868143    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.338537    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.364425    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.834969    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.863385    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.335679    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.363302    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.834187    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.863425    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.335319    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.364788    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.834405    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.866358    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.334954    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.363683    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.835249    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.863778    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.334768    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.364200    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.834391    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.864242    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.337055    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.438869    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.834059    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.863477    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.335556    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.363519    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.834391    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.864498    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.334127    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.363428    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.834420    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.864198    8275 kapi.go:107] duration metric: took 55.005733241s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:22:48.334372    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.833792    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.334405    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.834848    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.334437    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.833805    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.334622    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.834623    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.333772    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.833618    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.334182    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.834466    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.334429    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.834453    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.334925    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.837289    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.334475    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.835734    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.335956    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.833813    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.335003    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.835001    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.334693    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.848414    8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.339532    8275 kapi.go:107] duration metric: took 1m10.509870737s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:23:17.583081    8275 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:23:17.583110    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.058222    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.565684    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.057930    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.565170    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.058144    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.562600    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.058184    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.563485    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.058105    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.562984    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.058117    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.557545    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.058913    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.559000    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.061290    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.564194    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.057446    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.562938    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.058032    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.568467    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.057549    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.562879    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.061221    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.564543    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.062723    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.563293    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.057805    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.557883    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.058402    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.557892    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.059684    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.558123    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.058020    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.563920    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.058645    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.559302    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.058107    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.557961    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.058138    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.559275    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.058159    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.557869    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.058009    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.558419    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.059827    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.564811    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.058064    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.558013    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.058393    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.562593    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.058051    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.563534    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.059198    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.558109    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.059151    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.559779    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.057573    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.563596    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.057981    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.562521    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.058621    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.557311    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.057989    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.557668    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.058407    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.557639    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.059071    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.564119    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.058134    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.558272    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.058968    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.562937    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.058067    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.557496    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.058333    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.563113    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.057990    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.563024    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.058615    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.558372    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.058496    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.563810    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.057536    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.564297    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.062407    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.567010    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.057408    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.564395    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.058561    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.563388    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.059716    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.563073    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.057949    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.557733    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.057589    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.557743    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.057981    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.558610    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.058553    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.559377    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.058605    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.559822    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.058666    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.563120    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.058100    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.557228    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.057443    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.563693    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.058228    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.558041    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.058390    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.563919    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:14.058972    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:14.563907    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:15.058902    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:15.558138    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:16.058275    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:16.558092    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:17.057233    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:17.563521    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:18.059019    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:18.563370    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:19.058236    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:19.562953    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:20.057891    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:20.563590    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:21.058533    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:21.564430    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:22.058102    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:22.557199    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:23.057998    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:23.563140    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:24.057625    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:24.568685    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:25.059507    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:25.572494    8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:26.068931    8275 kapi.go:107] duration metric: took 2m31.51468247s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:26.070789    8275 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-193618 cluster.
	I0923 10:24:26.073140    8275 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:26.075034    8275 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:26.077542    8275 out.go:177] * Enabled addons: storage-provisioner, volcano, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 10:24:26.080471    8275 addons.go:510] duration metric: took 2m47.307162101s for enable addons: enabled=[storage-provisioner volcano nvidia-device-plugin cloud-spanner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 10:24:26.080549    8275 start.go:246] waiting for cluster config update ...
	I0923 10:24:26.080573    8275 start.go:255] writing updated cluster config ...
	I0923 10:24:26.080911    8275 ssh_runner.go:195] Run: rm -f paused
	I0923 10:24:26.418990    8275 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:24:26.421655    8275 out.go:177] * Done! kubectl is now configured to use "addons-193618" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 10:34:05 addons-193618 dockerd[1285]: time="2024-09-23T10:34:05.060074090Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ab650e308a53ab21 traceID=3d1911f9960c1c5dff7b44013d038ccc
	Sep 23 10:34:07 addons-193618 cri-dockerd[1543]: time="2024-09-23T10:34:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7dd0fb4424218647ad76450ce819b6e2768a818c793b254dc16822780e06b36e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 23 10:34:08 addons-193618 cri-dockerd[1543]: time="2024-09-23T10:34:08Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 23 10:34:14 addons-193618 dockerd[1285]: time="2024-09-23T10:34:14.526966721Z" level=info msg="ignoring event" container=69c62e6d82c12b55502fe8e2bea6183dffccde28b0e170b8e9e88665def7c202 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:14 addons-193618 dockerd[1285]: time="2024-09-23T10:34:14.656827927Z" level=info msg="ignoring event" container=7dd0fb4424218647ad76450ce819b6e2768a818c793b254dc16822780e06b36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.265928045Z" level=info msg="ignoring event" container=cebc37d9ea93cb3396ae3ba265d533dbfd0b52b14767f8e01d4bac1ec9e537a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.399923896Z" level=info msg="ignoring event" container=0519e6ec8caa54d21c569d413eca7ac4d57cfdfe62536572741ad0149345f976 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.424305726Z" level=info msg="ignoring event" container=c4231b0bcaa4db91fa345c05c451f895ab19d5f9146b0b1f1d29bfd82bcddc15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.434854034Z" level=info msg="ignoring event" container=cbc96faceed465711b8023b5fd080450313a1d5d7aace97764e62a30e2ea541d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.436166569Z" level=info msg="ignoring event" container=343f0c241f9138e33dc83a03292ff62c976daff2712140cd37dc4232f4c80196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.477921678Z" level=info msg="ignoring event" container=1caacf97352455224488da816121776194aaa520ad916072f2e05673cd463516 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.484313343Z" level=info msg="ignoring event" container=a26071c9bbe3ad72c95b07dc49aeca3c0cda93eb646a06f5f16ce4be7307bfad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.497712568Z" level=info msg="ignoring event" container=b04a490940e64ba9dd87f08a69da0d8ed51bda9476fbb79317729291f9096636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.516305188Z" level=info msg="ignoring event" container=ef4900f78f8a36ebd1e8bd8736a6416507e46f924f47b66d9a8fe14b78796df5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.755244405Z" level=info msg="ignoring event" container=05109b361043d32d8555a2a8ba7984423435358b7ae2b1adf4ba63b43661b51c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.794759782Z" level=info msg="ignoring event" container=28ec87b625c6cf9e40a79bf1ab55a2d96d87d0bff2daa6669ff97426eb00ffb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:22 addons-193618 dockerd[1285]: time="2024-09-23T10:34:22.960893490Z" level=info msg="ignoring event" container=23efb19617acfb2fb348381ca3647c92ea83c04e3839b4dfd1d484bc876b664b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:22 addons-193618 dockerd[1285]: time="2024-09-23T10:34:22.969881352Z" level=info msg="ignoring event" container=4b8f09461332556f05ddeb809328a18ba0fd447f750ca6375e4b0276185bfb95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:23 addons-193618 dockerd[1285]: time="2024-09-23T10:34:23.151013614Z" level=info msg="ignoring event" container=305ef0589f8af10ff46455e48b58a33bf188a0fd4bda30186ebd1b9fb7ac371a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:23 addons-193618 dockerd[1285]: time="2024-09-23T10:34:23.202896666Z" level=info msg="ignoring event" container=42c778a66a5dc80407c19c0cdf1788f1ac4de163348e7cb24a13439f7d12c4cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:24 addons-193618 dockerd[1285]: time="2024-09-23T10:34:24.735257760Z" level=info msg="ignoring event" container=19415a5860710b57c6d509b4ccc4d94ea14b58e29fa8ff5d2a40962973034565 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.439264957Z" level=info msg="ignoring event" container=940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.523752867Z" level=info msg="ignoring event" container=e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.661073131Z" level=info msg="ignoring event" container=5f240d7d405914a9d44232c66324b2cdeacebc85f4261a5b47c4436352b85186 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.790268065Z" level=info msg="ignoring event" container=4449e0de1aefc1e570b7d22b118eadff0325adfbdacea38f3299a5b14ebb453e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	35c68cbc9026a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            45 seconds ago      Exited              gadget                     7                   6b44463987b1e       gadget-st667
	ae8beaa6003c8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   33e079e6764e7       gcp-auth-89d5ffd79-4lsd5
	07a948d5bfd4a       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   343eb29074ed2       ingress-nginx-controller-bc57996ff-xl2d2
	723dd49ff0a10       420193b27261a                                                                                                                11 minutes ago      Exited              patch                      1                   f7e30b019b64c       ingress-nginx-admission-patch-5cd9z
	75d000c997318       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   e00f8cc3dae69       ingress-nginx-admission-create-nk96l
	c6347d14adc8b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   4d464912ccc11       local-path-provisioner-86d989889c-625dr
	76fa155a737d5       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server             0                   400cab5664026       metrics-server-84c5f94fbc-2sqlh
	66e82c3226c55       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   2e6f32a61851b       yakd-dashboard-67d98fc6b-wlb7t
	8b9efef765240       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e               12 minutes ago      Running             cloud-spanner-emulator     0                   f439c7debd80f       cloud-spanner-emulator-5b584cc74-tq68x
	3d622714d8738       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   87338c8fee98d       kube-ingress-dns-minikube
	721bb42a705a4       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   9bdbb56b3529b       nvidia-device-plugin-daemonset-5mdqb
	ae89bac99e2b0       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   bc31e051d25d7       storage-provisioner
	eaf065857e059       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   0cfe8aaacd67e       coredns-7c65d6cfc9-lxqrw
	9062e83d9da75       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   7883781f0bcb9       kube-proxy-9k229
	3c4822743ab5f       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   52969f4ea4f33       kube-scheduler-addons-193618
	0320e83e104fe       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   cae123d2fb50c       etcd-addons-193618
	49428be737406       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   6e8403b798aad       kube-apiserver-addons-193618
	b899dba6bcc74       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   b312571bd783d       kube-controller-manager-addons-193618
	
	
	==> controller_ingress [07a948d5bfd4] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0923 10:22:59.907541       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0923 10:23:00.882876       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0923 10:23:00.901063       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0923 10:23:00.911135       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0923 10:23:00.932703       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"209a71a3-9bd8-4a8c-b609-07e1111d6bf2", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0923 10:23:00.933526       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"265e10d2-baa3-4c83-8ab2-563c72829e4a", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0923 10:23:00.933555       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"dd8afef1-62f0-4462-a1df-f18a898e5259", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0923 10:23:02.113272       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0923 10:23:02.113275       7 nginx.go:317] "Starting NGINX process"
	I0923 10:23:02.113904       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0923 10:23:02.114185       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 10:23:02.133703       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0923 10:23:02.133921       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-xl2d2"
	I0923 10:23:02.154541       7 controller.go:213] "Backend successfully reloaded"
	I0923 10:23:02.154792       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0923 10:23:02.154914       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xl2d2", UID:"eb6dd0a7-0a66-4583-84e0-3164dc71b70a", APIVersion:"v1", ResourceVersion:"1276", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0923 10:23:02.161282       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-xl2d2" node="addons-193618"
	
	
	==> coredns [eaf065857e05] <==
	[INFO] 10.244.0.5:47700 - 13911 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038581s
	[INFO] 10.244.0.5:56106 - 22157 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007021973s
	[INFO] 10.244.0.5:56106 - 45450 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007899381s
	[INFO] 10.244.0.5:47955 - 45600 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049928s
	[INFO] 10.244.0.5:47955 - 17443 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00004311s
	[INFO] 10.244.0.5:47089 - 35257 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000637315s
	[INFO] 10.244.0.5:47089 - 58549 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000045695s
	[INFO] 10.244.0.5:39582 - 34582 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046409s
	[INFO] 10.244.0.5:39582 - 11547 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044013s
	[INFO] 10.244.0.5:52675 - 4008 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046606s
	[INFO] 10.244.0.5:52675 - 61366 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049215s
	[INFO] 10.244.0.5:45553 - 1439 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001501718s
	[INFO] 10.244.0.5:45553 - 20354 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001559885s
	[INFO] 10.244.0.5:51971 - 59058 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048091s
	[INFO] 10.244.0.5:51971 - 19888 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040772s
	[INFO] 10.244.0.25:37390 - 1808 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000256494s
	[INFO] 10.244.0.25:40077 - 1552 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173186s
	[INFO] 10.244.0.25:36127 - 16327 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088254s
	[INFO] 10.244.0.25:33763 - 50843 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069276s
	[INFO] 10.244.0.25:36605 - 45847 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000775133s
	[INFO] 10.244.0.25:54294 - 56081 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128698s
	[INFO] 10.244.0.25:50045 - 23584 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002552773s
	[INFO] 10.244.0.25:60343 - 48959 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004199974s
	[INFO] 10.244.0.25:55987 - 48606 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002017886s
	[INFO] 10.244.0.25:43811 - 45347 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001881944s
	
	
	==> describe nodes <==
	Name:               addons-193618
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-193618
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-193618
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-193618
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:21:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-193618
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:34:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-193618
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d998278c96147d49f1ab1e139e6ff1f
	  System UUID:                fa2e08c8-d57f-4dbe-a2dc-d866b9da2af3
	  Boot ID:                    a368a3b9-64b6-4915-adf4-926cc803503e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-5b584cc74-tq68x      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-st667                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-4lsd5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xl2d2    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-lxqrw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-193618                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-193618                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-193618       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9k229                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-193618                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-2sqlh             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-5mdqb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-625dr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-wlb7t              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             588Mi (7%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-193618 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node addons-193618 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-193618 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-193618 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-193618 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-193618 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-193618 event: Registered Node addons-193618 in Controller
	
	
	==> dmesg <==
	[Sep23 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015777] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503278] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.769655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.076197] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [0320e83e104f] <==
	{"level":"info","ts":"2024-09-23T10:21:28.165106Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T10:21:28.165329Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T10:21:28.521000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T10:21:28.521248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T10:21:28.521419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T10:21:28.521534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:28.521722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:28.521856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:28.521960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T10:21:28.523714Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-193618 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:21:28.523901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:21:28.524320Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:28.525600Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:21:28.529980Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:21:28.535850Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:28.573325Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:28.573489Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:21:28.535877Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:21:28.535980Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:21:28.585278Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:21:28.586052Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:21:28.594104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T10:31:29.185095Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1886}
	{"level":"info","ts":"2024-09-23T10:31:29.233313Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1886,"took":"47.406345ms","hash":4170233479,"current-db-size-bytes":8404992,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4845568,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-23T10:31:29.233374Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4170233479,"revision":1886,"compact-revision":-1}
	
	
	==> gcp-auth [ae8beaa6003c] <==
	2024/09/23 10:24:25 GCP Auth Webhook started!
	2024/09/23 10:24:43 Ready to marshal response ...
	2024/09/23 10:24:43 Ready to write response ...
	2024/09/23 10:24:44 Ready to marshal response ...
	2024/09/23 10:24:44 Ready to write response ...
	2024/09/23 10:25:08 Ready to marshal response ...
	2024/09/23 10:25:08 Ready to write response ...
	2024/09/23 10:25:09 Ready to marshal response ...
	2024/09/23 10:25:09 Ready to write response ...
	2024/09/23 10:25:09 Ready to marshal response ...
	2024/09/23 10:25:09 Ready to write response ...
	2024/09/23 10:33:13 Ready to marshal response ...
	2024/09/23 10:33:13 Ready to write response ...
	2024/09/23 10:33:13 Ready to marshal response ...
	2024/09/23 10:33:13 Ready to write response ...
	2024/09/23 10:33:13 Ready to marshal response ...
	2024/09/23 10:33:13 Ready to write response ...
	2024/09/23 10:33:24 Ready to marshal response ...
	2024/09/23 10:33:24 Ready to write response ...
	2024/09/23 10:33:48 Ready to marshal response ...
	2024/09/23 10:33:48 Ready to write response ...
	2024/09/23 10:34:07 Ready to marshal response ...
	2024/09/23 10:34:07 Ready to write response ...
	
	
	==> kernel <==
	 10:34:26 up 16 min,  0 users,  load average: 0.63, 0.66, 0.56
	Linux addons-193618 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [49428be73740] <==
	I0923 10:24:59.308636       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0923 10:24:59.328905       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 10:24:59.762198       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 10:24:59.798898       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 10:24:59.991659       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 10:25:00.157080       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0923 10:25:00.342637       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 10:25:00.446311       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 10:25:00.494214       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 10:25:00.621112       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 10:25:01.035216       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 10:25:01.493408       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 10:33:13.159718       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.142.64"}
	I0923 10:33:55.441717       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:34:22.627220       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:22.627261       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:22.701174       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:22.701451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:22.723308       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:22.725420       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:22.822030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:22.822084       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 10:34:23.702043       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 10:34:23.822693       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 10:34:23.827246       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [b899dba6bcc7] <==
	E0923 10:34:02.960399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:05.219955       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:05.220066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:06.270642       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:06.270692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:13.920970       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:13.921016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:34:16.167482       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0923 10:34:16.262485       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0923 10:34:16.543639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-193618"
	W0923 10:34:20.839685       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:20.839729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:34:22.863195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="12.734µs"
	E0923 10:34:23.703914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 10:34:23.824563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 10:34:23.829133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:24.584753       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:24.584803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:24.629635       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:24.629683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:25.066531       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:25.066582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:34:25.349340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.041µs"
	W0923 10:34:26.694219       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:26.694265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [9062e83d9da7] <==
	I0923 10:21:40.158856       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:21:40.370985       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:21:40.371050       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:21:40.394670       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:21:40.394752       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:21:40.410170       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:21:40.410501       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:21:40.410516       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:21:40.412009       1 config.go:199] "Starting service config controller"
	I0923 10:21:40.412036       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:21:40.412100       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:21:40.412106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:21:40.413453       1 config.go:328] "Starting node config controller"
	I0923 10:21:40.413475       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:21:40.512993       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:21:40.513067       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:21:40.513673       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c4822743ab5] <==
	E0923 10:21:31.525349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:31.522341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:31.525559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0923 10:21:31.525708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.335928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:21:32.335968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.420388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:21:32.420494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.472549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:32.473322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.473112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:21:32.473674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.538733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:21:32.539020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.591925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:21:32.592190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.636913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:21:32.636985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.676548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:21:32.676594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.682970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:32.683025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.683097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:32.683115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 10:21:33.105064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.438872    2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159d70bf-1cd6-47ab-9755-77249cf27379-kube-api-access-w7cbm" (OuterVolumeSpecName: "kube-api-access-w7cbm") pod "159d70bf-1cd6-47ab-9755-77249cf27379" (UID: "159d70bf-1cd6-47ab-9755-77249cf27379"). InnerVolumeSpecName "kube-api-access-w7cbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.536463    2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w7cbm\" (UniqueName: \"kubernetes.io/projected/159d70bf-1cd6-47ab-9755-77249cf27379-kube-api-access-w7cbm\") on node \"addons-193618\" DevicePath \"\""
	Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.834625    2328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159d70bf-1cd6-47ab-9755-77249cf27379" path="/var/lib/kubelet/pods/159d70bf-1cd6-47ab-9755-77249cf27379/volumes"
	Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.835016    2328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce169da-19ba-425a-8cd5-6d3f822f219a" path="/var/lib/kubelet/pods/1ce169da-19ba-425a-8cd5-6d3f822f219a/volumes"
	Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.946158    2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6xh2\" (UniqueName: \"kubernetes.io/projected/e65092e7-6746-42c0-a92c-d40091668e67-kube-api-access-z6xh2\") pod \"e65092e7-6746-42c0-a92c-d40091668e67\" (UID: \"e65092e7-6746-42c0-a92c-d40091668e67\") "
	Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.946248    2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e65092e7-6746-42c0-a92c-d40091668e67-gcp-creds\") pod \"e65092e7-6746-42c0-a92c-d40091668e67\" (UID: \"e65092e7-6746-42c0-a92c-d40091668e67\") "
	Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.946445    2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65092e7-6746-42c0-a92c-d40091668e67-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e65092e7-6746-42c0-a92c-d40091668e67" (UID: "e65092e7-6746-42c0-a92c-d40091668e67"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.949506    2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65092e7-6746-42c0-a92c-d40091668e67-kube-api-access-z6xh2" (OuterVolumeSpecName: "kube-api-access-z6xh2") pod "e65092e7-6746-42c0-a92c-d40091668e67" (UID: "e65092e7-6746-42c0-a92c-d40091668e67"). InnerVolumeSpecName "kube-api-access-z6xh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.047855    2328 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e65092e7-6746-42c0-a92c-d40091668e67-gcp-creds\") on node \"addons-193618\" DevicePath \"\""
	Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.047899    2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z6xh2\" (UniqueName: \"kubernetes.io/projected/e65092e7-6746-42c0-a92c-d40091668e67-kube-api-access-z6xh2\") on node \"addons-193618\" DevicePath \"\""
	Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.859331    2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzstt\" (UniqueName: \"kubernetes.io/projected/638f01b9-2726-41db-a1a9-43e4bf4d8443-kube-api-access-zzstt\") pod \"638f01b9-2726-41db-a1a9-43e4bf4d8443\" (UID: \"638f01b9-2726-41db-a1a9-43e4bf4d8443\") "
	Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.862312    2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638f01b9-2726-41db-a1a9-43e4bf4d8443-kube-api-access-zzstt" (OuterVolumeSpecName: "kube-api-access-zzstt") pod "638f01b9-2726-41db-a1a9-43e4bf4d8443" (UID: "638f01b9-2726-41db-a1a9-43e4bf4d8443"). InnerVolumeSpecName "kube-api-access-zzstt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.873570    2328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65092e7-6746-42c0-a92c-d40091668e67" path="/var/lib/kubelet/pods/e65092e7-6746-42c0-a92c-d40091668e67/volumes"
	Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.959725    2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zzstt\" (UniqueName: \"kubernetes.io/projected/638f01b9-2726-41db-a1a9-43e4bf4d8443-kube-api-access-zzstt\") on node \"addons-193618\" DevicePath \"\""
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.060172    2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpzfq\" (UniqueName: \"kubernetes.io/projected/cab49b7f-8d32-4017-9de8-d55b0ce0e2f3-kube-api-access-qpzfq\") pod \"cab49b7f-8d32-4017-9de8-d55b0ce0e2f3\" (UID: \"cab49b7f-8d32-4017-9de8-d55b0ce0e2f3\") "
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.062765    2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab49b7f-8d32-4017-9de8-d55b0ce0e2f3-kube-api-access-qpzfq" (OuterVolumeSpecName: "kube-api-access-qpzfq") pod "cab49b7f-8d32-4017-9de8-d55b0ce0e2f3" (UID: "cab49b7f-8d32-4017-9de8-d55b0ce0e2f3"). InnerVolumeSpecName "kube-api-access-qpzfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.161197    2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qpzfq\" (UniqueName: \"kubernetes.io/projected/cab49b7f-8d32-4017-9de8-d55b0ce0e2f3-kube-api-access-qpzfq\") on node \"addons-193618\" DevicePath \"\""
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.411668    2328 scope.go:117] "RemoveContainer" containerID="940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.476587    2328 scope.go:117] "RemoveContainer" containerID="940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
	Sep 23 10:34:26 addons-193618 kubelet[2328]: E0923 10:34:26.477720    2328 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b" containerID="940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.477772    2328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"} err="failed to get container status \"940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.477798    2328 scope.go:117] "RemoveContainer" containerID="e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.516649    2328 scope.go:117] "RemoveContainer" containerID="e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
	Sep 23 10:34:26 addons-193618 kubelet[2328]: E0923 10:34:26.517945    2328 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340" containerID="e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
	Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.517992    2328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"} err="failed to get container status \"e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340\": rpc error: code = Unknown desc = Error response from daemon: No such container: e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
	
	
	==> storage-provisioner [ae89bac99e2b] <==
	I0923 10:21:46.364192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:21:46.380027       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:21:46.380123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:21:46.395620       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:21:46.400566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-193618_86efd8e5-7d60-49ca-97ff-53ab0c106507!
	I0923 10:21:46.414459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3680dca0-c009-45d8-b484-66aad8e6eddc", APIVersion:"v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-193618_86efd8e5-7d60-49ca-97ff-53ab0c106507 became leader
	I0923 10:21:46.501636       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-193618_86efd8e5-7d60-49ca-97ff-53ab0c106507!
	E0923 10:34:15.210156       1 controller.go:1050] claim "bf1b17f7-808d-4288-b9cf-1d8eb86ef59c" in work queue no longer exists
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-193618 -n addons-193618
helpers_test.go:261: (dbg) Run:  kubectl --context addons-193618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-193618 describe pod busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-193618 describe pod busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z: exit status 1 (114.9274ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-193618/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 10:25:09 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml7d2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ml7d2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m18s                  default-scheduler  Successfully assigned default/busybox to addons-193618
	  Normal   Pulling    7m45s (x4 over 9m18s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m18s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m18s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m17s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nk96l" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5cd9z" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-193618 describe pod busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.51s)


Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.21
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.38
18 TestDownloadOnly/v1.31.1/DeleteAll 0.35
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.54
22 TestOffline 88.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 221.52
29 TestAddons/serial/Volcano 42.28
31 TestAddons/serial/GCPAuth/Namespaces 0.17
34 TestAddons/parallel/Ingress 20.85
35 TestAddons/parallel/InspektorGadget 10.84
36 TestAddons/parallel/MetricsServer 6.69
38 TestAddons/parallel/CSI 54.07
39 TestAddons/parallel/Headlamp 16.57
40 TestAddons/parallel/CloudSpanner 6.05
41 TestAddons/parallel/LocalPath 10.85
42 TestAddons/parallel/NvidiaDevicePlugin 5.46
43 TestAddons/parallel/Yakd 10.67
44 TestAddons/StoppedEnableDisable 11.4
45 TestCertOptions 37.93
46 TestCertExpiration 256.41
47 TestDockerFlags 41.66
48 TestForceSystemdFlag 43.45
49 TestForceSystemdEnv 43.17
55 TestErrorSpam/setup 30.56
56 TestErrorSpam/start 0.7
57 TestErrorSpam/status 1.1
58 TestErrorSpam/pause 1.34
59 TestErrorSpam/unpause 1.49
60 TestErrorSpam/stop 1.99
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 44.71
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 29.85
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
72 TestFunctional/serial/CacheCmd/cache/add_local 0.96
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 42.19
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.19
83 TestFunctional/serial/LogsFileCmd 1.52
84 TestFunctional/serial/InvalidService 4.83
86 TestFunctional/parallel/ConfigCmd 0.42
87 TestFunctional/parallel/DashboardCmd 9.9
88 TestFunctional/parallel/DryRun 0.6
89 TestFunctional/parallel/InternationalLanguage 0.25
90 TestFunctional/parallel/StatusCmd 1.33
94 TestFunctional/parallel/ServiceCmdConnect 12.8
95 TestFunctional/parallel/AddonsCmd 0.2
96 TestFunctional/parallel/PersistentVolumeClaim 30.64
98 TestFunctional/parallel/SSHCmd 0.67
99 TestFunctional/parallel/CpCmd 2.23
101 TestFunctional/parallel/FileSync 0.26
102 TestFunctional/parallel/CertSync 2.11
106 TestFunctional/parallel/NodeLabels 0.1
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
110 TestFunctional/parallel/License 0.27
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.32
123 TestFunctional/parallel/ServiceCmd/List 0.49
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.59
127 TestFunctional/parallel/ServiceCmd/Format 0.48
128 TestFunctional/parallel/ProfileCmd/profile_list 0.53
129 TestFunctional/parallel/ServiceCmd/URL 0.49
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.66
131 TestFunctional/parallel/MountCmd/any-port 8.27
132 TestFunctional/parallel/MountCmd/specific-port 2.22
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.62
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.12
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
141 TestFunctional/parallel/ImageCommands/Setup 0.68
142 TestFunctional/parallel/DockerEnv/bash 1.33
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.24
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 119.91
160 TestMultiControlPlane/serial/DeployApp 7.7
161 TestMultiControlPlane/serial/PingHostFromPods 1.77
162 TestMultiControlPlane/serial/AddWorkerNode 24.03
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
165 TestMultiControlPlane/serial/CopyFile 19.32
166 TestMultiControlPlane/serial/StopSecondaryNode 11.79
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
168 TestMultiControlPlane/serial/RestartSecondaryNode 53.99
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 246.82
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.44
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
173 TestMultiControlPlane/serial/StopCluster 32.74
174 TestMultiControlPlane/serial/RestartCluster 88.6
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
176 TestMultiControlPlane/serial/AddSecondaryNode 44.62
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
180 TestImageBuild/serial/Setup 34.51
181 TestImageBuild/serial/NormalBuild 2.01
182 TestImageBuild/serial/BuildWithBuildArg 1.38
183 TestImageBuild/serial/BuildWithDockerIgnore 1.02
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.71
188 TestJSONOutput/start/Command 41.83
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 1.15
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.55
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.95
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.22
213 TestKicCustomNetwork/create_custom_network 32.22
214 TestKicCustomNetwork/use_default_bridge_network 37.65
215 TestKicExistingNetwork 37.7
216 TestKicCustomSubnet 33.14
217 TestKicStaticIP 32.74
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 71.43
222 TestMountStart/serial/StartWithMountFirst 10.73
223 TestMountStart/serial/VerifyMountFirst 0.24
224 TestMountStart/serial/StartWithMountSecond 7.43
225 TestMountStart/serial/VerifyMountSecond 0.25
226 TestMountStart/serial/DeleteFirst 1.47
227 TestMountStart/serial/VerifyMountPostDelete 0.26
228 TestMountStart/serial/Stop 1.2
229 TestMountStart/serial/RestartStopped 8.08
230 TestMountStart/serial/VerifyMountPostStop 0.26
233 TestMultiNode/serial/FreshStart2Nodes 74.68
234 TestMultiNode/serial/DeployApp2Nodes 37.75
235 TestMultiNode/serial/PingHostFrom2Pods 1.01
236 TestMultiNode/serial/AddNode 19.06
237 TestMultiNode/serial/MultiNodeLabels 0.1
238 TestMultiNode/serial/ProfileList 0.67
239 TestMultiNode/serial/CopyFile 10.11
240 TestMultiNode/serial/StopNode 2.23
241 TestMultiNode/serial/StartAfterStop 10.7
242 TestMultiNode/serial/RestartKeepsNodes 118
243 TestMultiNode/serial/DeleteNode 5.56
244 TestMultiNode/serial/StopMultiNode 21.64
245 TestMultiNode/serial/RestartMultiNode 52.82
246 TestMultiNode/serial/ValidateNameConflict 34.51
251 TestPreload 170
253 TestScheduledStopUnix 106.92
254 TestSkaffold 118.33
256 TestInsufficientStorage 10.79
257 TestRunningBinaryUpgrade 84.03
259 TestKubernetesUpgrade 388.02
260 TestMissingContainerUpgrade 119.75
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 42.62
264 TestNoKubernetes/serial/StartWithStopK8s 18.11
265 TestNoKubernetes/serial/Start 9.5
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
267 TestNoKubernetes/serial/ProfileList 1.07
268 TestNoKubernetes/serial/Stop 1.22
269 TestNoKubernetes/serial/StartNoArgs 7.81
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
282 TestStoppedBinaryUpgrade/Setup 0.53
283 TestStoppedBinaryUpgrade/Upgrade 127.34
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
293 TestPause/serial/Start 44.41
294 TestPause/serial/SecondStartNoReconfiguration 34
295 TestPause/serial/Pause 0.6
296 TestPause/serial/VerifyStatus 0.33
297 TestPause/serial/Unpause 0.74
298 TestPause/serial/PauseAgain 0.79
299 TestPause/serial/DeletePaused 2.16
300 TestPause/serial/VerifyDeletedResources 0.49
301 TestNetworkPlugins/group/auto/Start 76.17
302 TestNetworkPlugins/group/auto/KubeletFlags 0.36
303 TestNetworkPlugins/group/auto/NetCatPod 11.4
304 TestNetworkPlugins/group/auto/DNS 0.22
305 TestNetworkPlugins/group/auto/Localhost 0.16
306 TestNetworkPlugins/group/auto/HairPin 0.15
307 TestNetworkPlugins/group/kindnet/Start 74.74
308 TestNetworkPlugins/group/calico/Start 79.3
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
311 TestNetworkPlugins/group/kindnet/NetCatPod 12.4
312 TestNetworkPlugins/group/kindnet/DNS 0.34
313 TestNetworkPlugins/group/kindnet/Localhost 0.28
314 TestNetworkPlugins/group/kindnet/HairPin 0.31
315 TestNetworkPlugins/group/calico/ControllerPod 6.02
316 TestNetworkPlugins/group/custom-flannel/Start 62.06
317 TestNetworkPlugins/group/calico/KubeletFlags 0.39
318 TestNetworkPlugins/group/calico/NetCatPod 12.34
319 TestNetworkPlugins/group/calico/DNS 0.28
320 TestNetworkPlugins/group/calico/Localhost 0.21
321 TestNetworkPlugins/group/calico/HairPin 0.21
322 TestNetworkPlugins/group/false/Start 88.25
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
325 TestNetworkPlugins/group/custom-flannel/DNS 0.29
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
328 TestNetworkPlugins/group/enable-default-cni/Start 79.19
329 TestNetworkPlugins/group/false/KubeletFlags 0.4
330 TestNetworkPlugins/group/false/NetCatPod 11.39
331 TestNetworkPlugins/group/false/DNS 0.25
332 TestNetworkPlugins/group/false/Localhost 0.19
333 TestNetworkPlugins/group/false/HairPin 0.16
334 TestNetworkPlugins/group/flannel/Start 59.38
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.37
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.35
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.3
340 TestNetworkPlugins/group/bridge/Start 71.71
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
343 TestNetworkPlugins/group/flannel/NetCatPod 11.34
344 TestNetworkPlugins/group/flannel/DNS 0.29
345 TestNetworkPlugins/group/flannel/Localhost 0.18
346 TestNetworkPlugins/group/flannel/HairPin 0.16
347 TestNetworkPlugins/group/kubenet/Start 79.07
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
349 TestNetworkPlugins/group/bridge/NetCatPod 10.44
350 TestNetworkPlugins/group/bridge/DNS 0.25
351 TestNetworkPlugins/group/bridge/Localhost 0.4
352 TestNetworkPlugins/group/bridge/HairPin 0.3
354 TestStartStop/group/old-k8s-version/serial/FirstStart 154.08
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.7
356 TestNetworkPlugins/group/kubenet/NetCatPod 13.38
357 TestNetworkPlugins/group/kubenet/DNS 0.17
358 TestNetworkPlugins/group/kubenet/Localhost 0.18
359 TestNetworkPlugins/group/kubenet/HairPin 0.16
361 TestStartStop/group/no-preload/serial/FirstStart 51.27
362 TestStartStop/group/no-preload/serial/DeployApp 9.38
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
364 TestStartStop/group/no-preload/serial/Stop 11.05
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
366 TestStartStop/group/no-preload/serial/SecondStart 269.83
367 TestStartStop/group/old-k8s-version/serial/DeployApp 9.79
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.28
369 TestStartStop/group/old-k8s-version/serial/Stop 11.09
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
371 TestStartStop/group/old-k8s-version/serial/SecondStart 135.86
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
375 TestStartStop/group/old-k8s-version/serial/Pause 2.86
377 TestStartStop/group/embed-certs/serial/FirstStart 73.72
378 TestStartStop/group/embed-certs/serial/DeployApp 9.37
379 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.15
381 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
382 TestStartStop/group/embed-certs/serial/Stop 12.7
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
384 TestStartStop/group/no-preload/serial/Pause 2.72
386 TestStartStop/group/newest-cni/serial/FirstStart 43.75
387 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
388 TestStartStop/group/embed-certs/serial/SecondStart 294.58
389 TestStartStop/group/newest-cni/serial/DeployApp 0
390 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
391 TestStartStop/group/newest-cni/serial/Stop 11.03
392 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/newest-cni/serial/SecondStart 17.65
394 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
395 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
397 TestStartStop/group/newest-cni/serial/Pause 3.15
399 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.57
400 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.39
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
402 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.1
403 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
404 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.67
405 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
407 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
408 TestStartStop/group/embed-certs/serial/Pause 2.85
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
TestDownloadOnly/v1.20.0/json-events (6.24s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-710688 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-710688 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.234963513s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.24s)
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 10:20:37.762086    7519 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 10:20:37.762171    7519 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-710688
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-710688: exit status 85 (65.520734ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-710688 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |          |
	|         | -p download-only-710688        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:31.574569    7524 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:31.574758    7524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:31.574784    7524 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:31.574806    7524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:31.575083    7524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	W0923 10:20:31.575259    7524 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19689-2206/.minikube/config/config.json: open /home/jenkins/minikube-integration/19689-2206/.minikube/config/config.json: no such file or directory
	I0923 10:20:31.575721    7524 out.go:352] Setting JSON to true
	I0923 10:20:31.576533    7524 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":179,"bootTime":1727086652,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 10:20:31.576638    7524 start.go:139] virtualization:  
	I0923 10:20:31.580096    7524 out.go:97] [download-only-710688] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0923 10:20:31.580255    7524 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:20:31.580289    7524 notify.go:220] Checking for updates...
	I0923 10:20:31.582501    7524 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:20:31.584428    7524 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:31.586616    7524 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	I0923 10:20:31.588828    7524 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	I0923 10:20:31.590757    7524 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 10:20:31.594317    7524 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:20:31.594566    7524 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:31.613947    7524 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:20:31.614048    7524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:31.919698    7524 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:20:31.910069649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:20:31.919812    7524 docker.go:318] overlay module found
	I0923 10:20:31.921827    7524 out.go:97] Using the docker driver based on user configuration
	I0923 10:20:31.921851    7524 start.go:297] selected driver: docker
	I0923 10:20:31.921858    7524 start.go:901] validating driver "docker" against <nil>
	I0923 10:20:31.921964    7524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:31.982063    7524 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:20:31.973243663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:20:31.982266    7524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:31.982586    7524 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 10:20:31.982764    7524 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:20:31.985374    7524 out.go:169] Using Docker driver with root privileges
	I0923 10:20:31.987283    7524 cni.go:84] Creating CNI manager for ""
	I0923 10:20:31.987352    7524 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 10:20:31.987432    7524 start.go:340] cluster config:
	{Name:download-only-710688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-710688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:20:31.989456    7524 out.go:97] Starting "download-only-710688" primary control-plane node in "download-only-710688" cluster
	I0923 10:20:31.989485    7524 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:20:31.991378    7524 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:20:31.991403    7524 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:31.991554    7524 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:20:32.012210    7524 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:32.012421    7524 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:20:32.012532    7524 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:32.045658    7524 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 10:20:32.045685    7524 cache.go:56] Caching tarball of preloaded images
	I0923 10:20:32.045851    7524 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:32.048064    7524 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 10:20:32.048096    7524 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 10:20:32.130081    7524 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 10:20:36.170842    7524 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 10:20:36.171005    7524 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 10:20:36.543050    7524 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	
	
	* The control-plane node download-only-710688 host does not exist
	  To start a cluster, run: "minikube start -p download-only-710688"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-710688
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
TestDownloadOnly/v1.31.1/json-events (4.21s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-126776 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-126776 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.209201899s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.21s)
TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 10:20:42.379660    7519 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 10:20:42.379697    7519 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
TestDownloadOnly/v1.31.1/LogsDuration (0.38s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-126776
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-126776: exit status 85 (379.858087ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-710688 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p download-only-710688        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p download-only-710688        | download-only-710688 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-126776 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p download-only-126776        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:38.218693    7721 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:38.218837    7721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:38.218870    7721 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:38.218884    7721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:38.219138    7721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 10:20:38.219585    7721 out.go:352] Setting JSON to true
	I0923 10:20:38.220357    7721 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":186,"bootTime":1727086652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 10:20:38.220438    7721 start.go:139] virtualization:  
	I0923 10:20:38.223117    7721 out.go:97] [download-only-126776] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 10:20:38.223320    7721 notify.go:220] Checking for updates...
	I0923 10:20:38.225627    7721 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:20:38.227822    7721 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:38.229961    7721 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	I0923 10:20:38.231927    7721 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	I0923 10:20:38.233683    7721 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 10:20:38.237586    7721 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:20:38.237828    7721 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:38.266659    7721 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:20:38.266812    7721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:38.329267    7721 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:20:38.319994893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:20:38.329379    7721 docker.go:318] overlay module found
	I0923 10:20:38.331452    7721 out.go:97] Using the docker driver based on user configuration
	I0923 10:20:38.331480    7721 start.go:297] selected driver: docker
	I0923 10:20:38.331487    7721 start.go:901] validating driver "docker" against <nil>
	I0923 10:20:38.331588    7721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:38.375975    7721 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:20:38.366683521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:20:38.376196    7721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:38.376481    7721 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 10:20:38.376636    7721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:20:38.378772    7721 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-126776 host does not exist
	  To start a cluster, run: "minikube start -p download-only-126776"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.38s)

TestDownloadOnly/v1.31.1/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.35s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-126776
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
I0923 10:20:44.313932    7519 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-590765 --alsologtostderr --binary-mirror http://127.0.0.1:33447 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-590765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-590765
--- PASS: TestBinaryMirror (0.54s)

TestOffline (88.68s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-198240 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-198240 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m26.424044148s)
helpers_test.go:175: Cleaning up "offline-docker-198240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-198240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-198240: (2.259464602s)
--- PASS: TestOffline (88.68s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-193618
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-193618: exit status 85 (61.280114ms)

-- stdout --
	* Profile "addons-193618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-193618"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-193618
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-193618: exit status 85 (65.555468ms)

-- stdout --
	* Profile "addons-193618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-193618"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (221.52s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-193618 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-193618 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m41.51452365s)
--- PASS: TestAddons/Setup (221.52s)

TestAddons/serial/Volcano (42.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 56.868074ms
addons_test.go:835: volcano-scheduler stabilized in 57.008612ms
addons_test.go:843: volcano-admission stabilized in 57.050828ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-lszp8" [d8c9fe05-f317-4f32-a431-a82c67a111f5] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.006087718s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-g6xlm" [c394f378-8409-4303-bb75-52c934abc242] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.004200727s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-btqgf" [971a940e-c4e5-4910-b396-58a64e7b33c8] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00347857s
addons_test.go:870: (dbg) Run:  kubectl --context addons-193618 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-193618 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-193618 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ec0bc86f-623e-4869-8135-4987d8eeaee5] Pending
helpers_test.go:344: "test-job-nginx-0" [ec0bc86f-623e-4869-8135-4987d8eeaee5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ec0bc86f-623e-4869-8135-4987d8eeaee5] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003736582s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 addons disable volcano --alsologtostderr -v=1: (10.545448998s)
--- PASS: TestAddons/serial/Volcano (42.28s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-193618 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-193618 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Ingress (20.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-193618 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-193618 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-193618 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [103b2de5-3615-43c2-a2ea-282d19a952dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [103b2de5-3615-43c2-a2ea-282d19a952dc] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003369645s
I0923 10:34:40.332328    7519 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-193618 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 addons disable ingress-dns --alsologtostderr -v=1: (1.547677961s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 addons disable ingress --alsologtostderr -v=1: (7.724065592s)
--- PASS: TestAddons/parallel/Ingress (20.85s)

TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-st667" [a2554b0d-3332-45fe-a24a-25c0ca13fff4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004632728s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-193618
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-193618: (5.828644839s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

TestAddons/parallel/MetricsServer (6.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.073178ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2sqlh" [8f5addb1-a8f0-4eab-a10b-b9726aa3efae] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00461952s
addons_test.go:413: (dbg) Run:  kubectl --context addons-193618 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.69s)

TestAddons/parallel/CSI (54.07s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0923 10:33:28.939170    7519 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:33:28.944847    7519 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:33:28.944884    7519 kapi.go:107] duration metric: took 7.708565ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.718575ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-193618 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-193618 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8637eaec-9c6e-4ff8-abb0-6158226c9118] Pending
helpers_test.go:344: "task-pv-pod" [8637eaec-9c6e-4ff8-abb0-6158226c9118] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8637eaec-9c6e-4ff8-abb0-6158226c9118] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004188598s
addons_test.go:528: (dbg) Run:  kubectl --context addons-193618 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-193618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-193618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-193618 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-193618 delete pod task-pv-pod: (1.241597804s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-193618 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-193618 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-193618 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [42af8046-c072-4c60-b83b-8ab4113cdfe3] Pending
helpers_test.go:344: "task-pv-pod-restore" [42af8046-c072-4c60-b83b-8ab4113cdfe3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [42af8046-c072-4c60-b83b-8ab4113cdfe3] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004754152s
addons_test.go:570: (dbg) Run:  kubectl --context addons-193618 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-193618 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-193618 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.677817186s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.07s)

TestAddons/parallel/Headlamp (16.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-193618 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-g9l4m" [edb3d92f-eb82-45a5-affa-d3e42fc98030] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-g9l4m" [edb3d92f-eb82-45a5-affa-d3e42fc98030] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-g9l4m" [edb3d92f-eb82-45a5-affa-d3e42fc98030] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004402853s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 addons disable headlamp --alsologtostderr -v=1: (5.691848399s)
--- PASS: TestAddons/parallel/Headlamp (16.57s)

TestAddons/parallel/CloudSpanner (6.05s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-tq68x" [16f6c188-caad-4ae9-ad63-b8a6a9415fdd] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004408244s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-193618
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons disable cloud-spanner -p addons-193618: (1.039475021s)
--- PASS: TestAddons/parallel/CloudSpanner (6.05s)

TestAddons/parallel/LocalPath (10.85s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-193618 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-193618 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-193618 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [84c6ebdb-9be3-4b96-9347-1aee9dacce23] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [84c6ebdb-9be3-4b96-9347-1aee9dacce23] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [84c6ebdb-9be3-4b96-9347-1aee9dacce23] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003166022s
addons_test.go:938: (dbg) Run:  kubectl --context addons-193618 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 ssh "cat /opt/local-path-provisioner/pvc-f414f75f-6cff-44fa-89fd-a446557abdb2_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-193618 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-193618 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.85s)

TestAddons/parallel/NvidiaDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5mdqb" [aefa91be-a5e1-48f3-a1b2-2499c4661d89] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003952103s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-193618
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

TestAddons/parallel/Yakd (10.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-wlb7t" [25c698c2-aa25-4dd3-bf32-9c0c960ba866] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003990624s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-193618 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 addons disable yakd --alsologtostderr -v=1: (5.663437479s)
--- PASS: TestAddons/parallel/Yakd (10.67s)

TestAddons/StoppedEnableDisable (11.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-193618
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-193618: (11.125264247s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-193618
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-193618
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-193618
--- PASS: TestAddons/StoppedEnableDisable (11.40s)

TestCertOptions (37.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-045884 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-045884 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.207327742s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-045884 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-045884 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-045884 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-045884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-045884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-045884: (2.08480613s)
--- PASS: TestCertOptions (37.93s)

TestCertExpiration (256.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-119979 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0923 11:12:29.535757    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-119979 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (43.589494098s)
E0923 11:13:07.491649    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-119979 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0923 11:15:53.886110    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:16:10.560665    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-119979 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (30.098597967s)
helpers_test.go:175: Cleaning up "cert-expiration-119979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-119979
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-119979: (2.721759611s)
--- PASS: TestCertExpiration (256.41s)

TestDockerFlags (41.66s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-742426 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-742426 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.397664317s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-742426 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-742426 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-742426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-742426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-742426: (2.33191055s)
--- PASS: TestDockerFlags (41.66s)

TestForceSystemdFlag (43.45s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-930062 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-930062 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.567217724s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-930062 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-930062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-930062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-930062: (2.434811219s)
--- PASS: TestForceSystemdFlag (43.45s)

TestForceSystemdEnv (43.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-704073 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-704073 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.235931646s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-704073 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-704073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-704073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-704073: (2.453934313s)
--- PASS: TestForceSystemdEnv (43.17s)

TestErrorSpam/setup (30.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-637586 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-637586 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-637586 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-637586 --driver=docker  --container-runtime=docker: (30.557803518s)
--- PASS: TestErrorSpam/setup (30.56s)

TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 pause
--- PASS: TestErrorSpam/pause (1.34s)

TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (1.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 stop: (1.801093222s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-637586 --log_dir /tmp/nospam-637586 stop
--- PASS: TestErrorSpam/stop (1.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19689-2206/.minikube/files/etc/test/nested/copy/7519/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (44.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-716711 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-716711 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (44.710267789s)
--- PASS: TestFunctional/serial/StartWithProxy (44.71s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.85s)

=== RUN   TestFunctional/serial/SoftStart
I0923 10:36:39.229296    7519 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-716711 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-716711 --alsologtostderr -v=8: (29.838890116s)
functional_test.go:663: soft start took 29.845587317s for "functional-716711" cluster.
I0923 10:37:09.068528    7519 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.85s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-716711 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-716711 cache add registry.k8s.io/pause:3.1: (1.147227584s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-716711 cache add registry.k8s.io/pause:3.3: (1.057883684s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-716711 cache add registry.k8s.io/pause:latest: (1.035637875s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-716711 /tmp/TestFunctionalserialCacheCmdcacheadd_local2390141802/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cache add minikube-local-cache-test:functional-716711
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cache delete minikube-local-cache-test:functional-716711
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-716711
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.098635ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 kubectl -- --context functional-716711 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-716711 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.19s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-716711 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-716711 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.189233144s)
functional_test.go:761: restart took 42.189370014s for "functional-716711" cluster.
I0923 10:37:58.095099    7519 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (42.19s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-716711 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.19s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-716711 logs: (1.192178376s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

TestFunctional/serial/LogsFileCmd (1.52s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 logs --file /tmp/TestFunctionalserialLogsFileCmd4083297613/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-716711 logs --file /tmp/TestFunctionalserialLogsFileCmd4083297613/001/logs.txt: (1.516126757s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

TestFunctional/serial/InvalidService (4.83s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-716711 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-716711
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-716711: exit status 115 (398.308073ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30183 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-716711 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-716711 delete -f testdata/invalidsvc.yaml: (1.16677316s)
--- PASS: TestFunctional/serial/InvalidService (4.83s)
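`minikube service` exits with `SVC_UNREACHABLE` above because no running pod backs the service. A hedged sketch of that reachability condition (hypothetical helper, not minikube's actual code):

```python
def service_reachable(service_selector: dict, pods: list[dict]) -> bool:
    """A service is usable only if at least one Running pod matches its
    selector -- the condition behind the SVC_UNREACHABLE exit above."""
    return any(
        pod["phase"] == "Running"
        and all(pod["labels"].get(k) == v for k, v in service_selector.items())
        for pod in pods
    )

pods = [{"labels": {"app": "invalid-svc"}, "phase": "Pending"}]
print(service_reachable({"app": "invalid-svc"}, pods))  # prints: False
```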

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 config get cpus: exit status 14 (65.118224ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 config get cpus: exit status 14 (74.941223ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
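The round-trip above is unset → get (fails), set 2 → get (succeeds), unset → get (fails), with minikube signalling a missing key via exit status 14. A minimal in-memory sketch of those semantics (not minikube's implementation):

```python
EXIT_KEY_NOT_FOUND = 14  # exit status seen in the log for a missing key

config: dict[str, str] = {}

def config_set(key: str, value: str) -> None:
    config[key] = value

def config_unset(key: str) -> None:
    config.pop(key, None)

def config_get(key: str) -> str:
    if key not in config:
        # mirrors "Error: specified key could not be found in config"
        raise SystemExit(EXIT_KEY_NOT_FOUND)
    return config[key]

config_set("cpus", "2")
print(config_get("cpus"))  # prints: 2
config_unset("cpus")
```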

TestFunctional/parallel/DashboardCmd (9.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-716711 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-716711 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48755: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.90s)

TestFunctional/parallel/DryRun (0.6s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-716711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-716711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (283.350234ms)
-- stdout --
	* [functional-716711] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0923 10:38:42.027724   48425 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:38:42.027996   48425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:38:42.028031   48425 out.go:358] Setting ErrFile to fd 2...
	I0923 10:38:42.028090   48425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:38:42.028388   48425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 10:38:42.028859   48425 out.go:352] Setting JSON to false
	I0923 10:38:42.030123   48425 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1270,"bootTime":1727086652,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 10:38:42.030330   48425 start.go:139] virtualization:  
	I0923 10:38:42.033098   48425 out.go:177] * [functional-716711] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 10:38:42.035248   48425 notify.go:220] Checking for updates...
	I0923 10:38:42.035878   48425 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:38:42.040199   48425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:38:42.043154   48425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	I0923 10:38:42.046441   48425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	I0923 10:38:42.048990   48425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 10:38:42.056124   48425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:38:42.061458   48425 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:38:42.062125   48425 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:38:42.105057   48425 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:38:42.105211   48425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:38:42.216013   48425 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:38:42.204858544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:38:42.216133   48425 docker.go:318] overlay module found
	I0923 10:38:42.219952   48425 out.go:177] * Using the docker driver based on existing profile
	I0923 10:38:42.230209   48425 start.go:297] selected driver: docker
	I0923 10:38:42.230250   48425 start.go:901] validating driver "docker" against &{Name:functional-716711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:38:42.230417   48425 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:38:42.233418   48425 out.go:201] 
	W0923 10:38:42.235748   48425 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 10:38:42.238121   48425 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-716711 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.60s)
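The dry run above fails with exit status 23 (`RSRC_INSUFFICIENT_REQ_MEMORY`) because 250MB is below the 1800MB usable minimum quoted in the error. A hedged sketch of that guard (the constant is taken from the log message, not from minikube's source):

```python
MIN_USABLE_MB = 1800  # usable minimum quoted by the RSRC_INSUFFICIENT_REQ_MEMORY error

def validate_memory(requested_mb: int) -> None:
    """Reject a --memory request below the usable minimum (illustrative
    sketch of the validation the dry run exercises above)."""
    if requested_mb < MIN_USABLE_MB:
        raise ValueError(
            f"Requested memory allocation {requested_mb}MiB is less than "
            f"the usable minimum of {MIN_USABLE_MB}MB")

validate_memory(4000)  # the profile's configured 4000MB passes the check
```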

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-716711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-716711 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (247.208853ms)
-- stdout --
	* [functional-716711] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0923 10:38:41.786278   48374 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:38:41.786502   48374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:38:41.786530   48374 out.go:358] Setting ErrFile to fd 2...
	I0923 10:38:41.786548   48374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:38:41.787547   48374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 10:38:41.788086   48374 out.go:352] Setting JSON to false
	I0923 10:38:41.789373   48374 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1270,"bootTime":1727086652,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 10:38:41.789468   48374 start.go:139] virtualization:  
	I0923 10:38:41.791938   48374 out.go:177] * [functional-716711] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0923 10:38:41.794127   48374 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:38:41.794181   48374 notify.go:220] Checking for updates...
	I0923 10:38:41.796130   48374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:38:41.798977   48374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	I0923 10:38:41.802514   48374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	I0923 10:38:41.808309   48374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 10:38:41.809983   48374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:38:41.813126   48374 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:38:41.813713   48374 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:38:41.848314   48374 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:38:41.848468   48374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:38:41.944600   48374 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 10:38:41.932916401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:38:41.944716   48374 docker.go:318] overlay module found
	I0923 10:38:41.947534   48374 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 10:38:41.949440   48374 start.go:297] selected driver: docker
	I0923 10:38:41.949457   48374 start.go:901] validating driver "docker" against &{Name:functional-716711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:38:41.949574   48374 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:38:41.951816   48374 out.go:201] 
	W0923 10:38:41.953903   48374 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 10:38:41.955752   48374 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.33s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)
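The `-f` invocation above renders a Go template into a single line of the form `host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured` (note the template's own "kublet" spelling, quoted here verbatim). A small sketch of parsing that output, with an assumed sample line:

```python
def parse_status(line: str) -> dict[str, str]:
    """Split `minikube status -f` output of the form key:value,key:value."""
    return dict(field.split(":", 1) for field in line.split(","))

line = "host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured"
print(parse_status(line)["apiserver"])  # prints: Running
```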

TestFunctional/parallel/ServiceCmdConnect (12.8s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-716711 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-716711 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-5rh7p" [11defbc0-ef64-414d-8d45-2c4d668d3b6b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-5rh7p" [11defbc0-ef64-414d-8d45-2c4d668d3b6b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.007158753s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32737
functional_test.go:1675: http://192.168.49.2:32737: success! body:

Hostname: hello-node-connect-65d86f57f4-5rh7p

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32737
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.80s)
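The test above creates a deployment, exposes it as a NodePort, then polls the URL minikube reports until the echoserver answers. A client-side sketch of that probe loop (the `fetch` parameter is an illustrative injection point for testing, not part of the Go test):

```python
import time
from urllib.request import urlopen
from urllib.error import URLError

def wait_for_endpoint(url: str, timeout: float = 60.0, fetch=urlopen) -> bytes:
    """Poll a NodePort URL until it responds or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with fetch(url) as resp:
                return resp.read()
        except URLError:
            time.sleep(1.0)  # endpoint not up yet; retry
    raise TimeoutError(f"{url} did not answer within {timeout}s")
```

Against the cluster above this would be called as `wait_for_endpoint("http://192.168.49.2:32737")`.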

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (30.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7a520991-8598-4440-a16b-be14f4c8406f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003652587s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-716711 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-716711 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-716711 get pvc myclaim -o=json
I0923 10:38:14.295088    7519 retry.go:31] will retry after 1.356170172s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:6cabb58f-61d2-4eee-8c5b-69f1e5d16630 ResourceVersion:662 Generation:0 CreationTimestamp:2024-09-23 10:38:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x400073a200 VolumeMode:0x400073a280 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-716711 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-716711 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5b449afa-02f5-4a12-a417-b777bb172344] Pending
helpers_test.go:344: "sp-pod" [5b449afa-02f5-4a12-a417-b777bb172344] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5b449afa-02f5-4a12-a417-b777bb172344] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004004924s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-716711 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-716711 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-716711 delete -f testdata/storage-provisioner/pod.yaml: (1.052669349s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-716711 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7e3e18bd-e746-473a-bb53-4b0eede33441] Pending
helpers_test.go:344: "sp-pod" [7e3e18bd-e746-473a-bb53-4b0eede33441] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7e3e18bd-e746-473a-bb53-4b0eede33441] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002830571s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-716711 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.64s)

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.23s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh -n functional-716711 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cp functional-716711:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2601029307/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh -n functional-716711 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh -n functional-716711 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.23s)

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7519/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo cat /etc/test/nested/copy/7519/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (2.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7519.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo cat /etc/ssl/certs/7519.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7519.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo cat /usr/share/ca-certificates/7519.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo cat /etc/ssl/certs/75192.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo cat /usr/share/ca-certificates/75192.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-716711 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 ssh "sudo systemctl is-active crio": exit status 1 (342.586189ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-716711 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-716711 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-716711 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45612: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-716711 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-716711 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-716711 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [46c3745c-5d4c-4685-8be6-b6d1ae34db57] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [46c3745c-5d4c-4685-8be6-b6d1ae34db57] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.006511561s
I0923 10:38:16.932069    7519 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-716711 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.169.202 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-716711 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-716711 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-716711 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-vkjnl" [d7a95059-7509-41e9-a744-1bc452b61256] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-vkjnl" [d7a95059-7509-41e9-a744-1bc452b61256] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003812438s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.32s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 service list -o json
functional_test.go:1494: Took "521.543538ms" to run "out/minikube-linux-arm64 -p functional-716711 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30230
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "447.171838ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "83.041578ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30230
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "512.101234ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "144.290735ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)

TestFunctional/parallel/MountCmd/any-port (8.27s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdany-port2319550006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727087919993608814" to /tmp/TestFunctionalparallelMountCmdany-port2319550006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727087919993608814" to /tmp/TestFunctionalparallelMountCmdany-port2319550006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727087919993608814" to /tmp/TestFunctionalparallelMountCmdany-port2319550006/001/test-1727087919993608814
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (501.151827ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 10:38:40.495027    7519 retry.go:31] will retry after 259.805269ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 10:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 10:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 10:38 test-1727087919993608814
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh cat /mount-9p/test-1727087919993608814
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-716711 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0ce8ea51-116c-4369-861a-906c7d33d2e7] Pending
helpers_test.go:344: "busybox-mount" [0ce8ea51-116c-4369-861a-906c7d33d2e7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0ce8ea51-116c-4369-861a-906c7d33d2e7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0ce8ea51-116c-4369-861a-906c7d33d2e7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004172256s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-716711 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdany-port2319550006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.27s)

TestFunctional/parallel/MountCmd/specific-port (2.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdspecific-port616371713/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (530.099411ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 10:38:48.790116    7519 retry.go:31] will retry after 469.549437ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdspecific-port616371713/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 ssh "sudo umount -f /mount-9p": exit status 1 (341.737405ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-716711 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdspecific-port616371713/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3646858854/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3646858854/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3646858854/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T" /mount1: exit status 1 (982.056905ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 10:38:51.466908    7519 retry.go:31] will retry after 606.381351ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T" /mount1
2024/09/23 10:38:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-716711 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3646858854/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3646858854/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-716711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3646858854/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.62s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)
TestFunctional/parallel/Version/components (1.12s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-716711 version -o=json --components: (1.121550445s)
--- PASS: TestFunctional/parallel/Version/components (1.12s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-716711 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-716711
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-716711
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-716711 image ls --format short --alsologtostderr:
I0923 10:38:59.179839   51661 out.go:345] Setting OutFile to fd 1 ...
I0923 10:38:59.179999   51661 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.180010   51661 out.go:358] Setting ErrFile to fd 2...
I0923 10:38:59.180016   51661 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.180309   51661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
I0923 10:38:59.181021   51661 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.181186   51661 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.181702   51661 cli_runner.go:164] Run: docker container inspect functional-716711 --format={{.State.Status}}
I0923 10:38:59.222911   51661 ssh_runner.go:195] Run: systemctl --version
I0923 10:38:59.223000   51661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716711
I0923 10:38:59.255899   51661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/functional-716711/id_rsa Username:docker}
I0923 10:38:59.358105   51661 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-716711 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-716711 | 7f6dd58ff0bb9 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kicbase/echo-server               | functional-716711 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-716711 image ls --format table --alsologtostderr:
I0923 10:38:59.770407   51819 out.go:345] Setting OutFile to fd 1 ...
I0923 10:38:59.770565   51819 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.770572   51819 out.go:358] Setting ErrFile to fd 2...
I0923 10:38:59.770578   51819 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.770810   51819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
I0923 10:38:59.771422   51819 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.771546   51819 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.772040   51819 cli_runner.go:164] Run: docker container inspect functional-716711 --format={{.State.Status}}
I0923 10:38:59.791148   51819 ssh_runner.go:195] Run: systemctl --version
I0923 10:38:59.791211   51819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716711
I0923 10:38:59.822325   51819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/functional-716711/id_rsa Username:docker}
I0923 10:38:59.917349   51819 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-716711 image ls --format json --alsologtostderr:
[{"id":"7f6dd58ff0bb9a08965fe911efee207fcdb575eb5fe1ba2095ce3d72be14c70d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-716711"],"size":"30"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-716711"],"size":"4780000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-716711 image ls --format json --alsologtostderr:
I0923 10:38:59.492362   51731 out.go:345] Setting OutFile to fd 1 ...
I0923 10:38:59.492572   51731 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.492584   51731 out.go:358] Setting ErrFile to fd 2...
I0923 10:38:59.492590   51731 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.492861   51731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
I0923 10:38:59.494132   51731 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.494399   51731 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.494998   51731 cli_runner.go:164] Run: docker container inspect functional-716711 --format={{.State.Status}}
I0923 10:38:59.523192   51731 ssh_runner.go:195] Run: systemctl --version
I0923 10:38:59.523257   51731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716711
I0923 10:38:59.564657   51731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/functional-716711/id_rsa Username:docker}
I0923 10:38:59.661701   51731 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-716711 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 7f6dd58ff0bb9a08965fe911efee207fcdb575eb5fe1ba2095ce3d72be14c70d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-716711
size: "30"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-716711
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-716711 image ls --format yaml --alsologtostderr:
I0923 10:38:59.177404   51662 out.go:345] Setting OutFile to fd 1 ...
I0923 10:38:59.177623   51662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.177654   51662 out.go:358] Setting ErrFile to fd 2...
I0923 10:38:59.177679   51662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.177945   51662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
I0923 10:38:59.178642   51662 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.178800   51662 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.179310   51662 cli_runner.go:164] Run: docker container inspect functional-716711 --format={{.State.Status}}
I0923 10:38:59.198203   51662 ssh_runner.go:195] Run: systemctl --version
I0923 10:38:59.198256   51662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716711
I0923 10:38:59.240804   51662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/functional-716711/id_rsa Username:docker}
I0923 10:38:59.333354   51662 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-716711 ssh pgrep buildkitd: exit status 1 (350.781089ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image build -t localhost/my-image:functional-716711 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-716711 image build -t localhost/my-image:functional-716711 testdata/build --alsologtostderr: (2.987842877s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-716711 image build -t localhost/my-image:functional-716711 testdata/build --alsologtostderr:
I0923 10:38:59.797845   51824 out.go:345] Setting OutFile to fd 1 ...
I0923 10:38:59.798083   51824 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.798110   51824 out.go:358] Setting ErrFile to fd 2...
I0923 10:38:59.798133   51824 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:38:59.798389   51824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
I0923 10:38:59.799094   51824 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.799772   51824 config.go:182] Loaded profile config "functional-716711": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:38:59.800347   51824 cli_runner.go:164] Run: docker container inspect functional-716711 --format={{.State.Status}}
I0923 10:38:59.821000   51824 ssh_runner.go:195] Run: systemctl --version
I0923 10:38:59.821054   51824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716711
I0923 10:38:59.845951   51824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/functional-716711/id_rsa Username:docker}
I0923 10:38:59.942769   51824 build_images.go:161] Building image from path: /tmp/build.2040670560.tar
I0923 10:38:59.942829   51824 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 10:38:59.956534   51824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2040670560.tar
I0923 10:38:59.960255   51824 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2040670560.tar: stat -c "%s %y" /var/lib/minikube/build/build.2040670560.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2040670560.tar': No such file or directory
I0923 10:38:59.960284   51824 ssh_runner.go:362] scp /tmp/build.2040670560.tar --> /var/lib/minikube/build/build.2040670560.tar (3072 bytes)
I0923 10:38:59.986081   51824 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2040670560
I0923 10:38:59.994944   51824 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2040670560 -xf /var/lib/minikube/build/build.2040670560.tar
I0923 10:39:00.017321   51824 docker.go:360] Building image: /var/lib/minikube/build/build.2040670560
I0923 10:39:00.017408   51824 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-716711 /var/lib/minikube/build/build.2040670560
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:09e46b85793674d29432c412b251f00f8b84106d093012dbba51bb83365717aa done
#8 naming to localhost/my-image:functional-716711 done
#8 DONE 0.1s
I0923 10:39:02.673260   51824 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-716711 /var/lib/minikube/build/build.2040670560: (2.655828084s)
I0923 10:39:02.673325   51824 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2040670560
I0923 10:39:02.685062   51824 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2040670560.tar
I0923 10:39:02.694970   51824 build_images.go:217] Built localhost/my-image:functional-716711 from /tmp/build.2040670560.tar
I0923 10:39:02.695002   51824 build_images.go:133] succeeded building to: functional-716711
I0923 10:39:02.695007   51824 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
TestFunctional/parallel/ImageCommands/Setup (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-716711
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)
TestFunctional/parallel/DockerEnv/bash (1.33s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-716711 docker-env) && out/minikube-linux-arm64 status -p functional-716711"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-716711 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.33s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image load --daemon kicbase/echo-server:functional-716711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image load --daemon kicbase/echo-server:functional-716711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-716711
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image load --daemon kicbase/echo-server:functional-716711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image save kicbase/echo-server:functional-716711 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image rm kicbase/echo-server:functional-716711 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-716711
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-716711 image save --daemon kicbase/echo-server:functional-716711 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-716711
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-716711
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-716711
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-716711
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (119.91s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-767207 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0923 10:39:26.470012    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:26.476403    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:26.487766    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:26.509143    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:26.550479    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:26.631856    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:26.793405    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:27.115033    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:27.757132    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:29.039381    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:31.601419    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:36.722979    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:46.964646    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:07.445946    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:48.407706    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-767207 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m59.059629257s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (119.91s)

TestMultiControlPlane/serial/DeployApp (7.7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-767207 -- rollout status deployment/busybox: (4.549990375s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-5kq6m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-9vst6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-ftd9k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-5kq6m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-9vst6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-ftd9k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-5kq6m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-9vst6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-ftd9k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.70s)

TestMultiControlPlane/serial/PingHostFromPods (1.77s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-5kq6m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-5kq6m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-9vst6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-9vst6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-ftd9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767207 -- exec busybox-7dff88458-ftd9k -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.77s)

TestMultiControlPlane/serial/AddWorkerNode (24.03s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-767207 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-767207 -v=7 --alsologtostderr: (23.011915417s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr: (1.022208534s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.03s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-767207 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.037056268s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (19.32s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-767207 status --output json -v=7 --alsologtostderr: (1.088371796s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp testdata/cp-test.txt ha-767207:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567252446/001/cp-test_ha-767207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207:/home/docker/cp-test.txt ha-767207-m02:/home/docker/cp-test_ha-767207_ha-767207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test_ha-767207_ha-767207-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207:/home/docker/cp-test.txt ha-767207-m03:/home/docker/cp-test_ha-767207_ha-767207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test_ha-767207_ha-767207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207:/home/docker/cp-test.txt ha-767207-m04:/home/docker/cp-test_ha-767207_ha-767207-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test_ha-767207_ha-767207-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp testdata/cp-test.txt ha-767207-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567252446/001/cp-test_ha-767207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m02:/home/docker/cp-test.txt ha-767207:/home/docker/cp-test_ha-767207-m02_ha-767207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test_ha-767207-m02_ha-767207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m02:/home/docker/cp-test.txt ha-767207-m03:/home/docker/cp-test_ha-767207-m02_ha-767207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test_ha-767207-m02_ha-767207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m02:/home/docker/cp-test.txt ha-767207-m04:/home/docker/cp-test_ha-767207-m02_ha-767207-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test_ha-767207-m02_ha-767207-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp testdata/cp-test.txt ha-767207-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567252446/001/cp-test_ha-767207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m03:/home/docker/cp-test.txt ha-767207:/home/docker/cp-test_ha-767207-m03_ha-767207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test_ha-767207-m03_ha-767207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m03:/home/docker/cp-test.txt ha-767207-m02:/home/docker/cp-test_ha-767207-m03_ha-767207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test_ha-767207-m03_ha-767207-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m03:/home/docker/cp-test.txt ha-767207-m04:/home/docker/cp-test_ha-767207-m03_ha-767207-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test_ha-767207-m03_ha-767207-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp testdata/cp-test.txt ha-767207-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567252446/001/cp-test_ha-767207-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m04:/home/docker/cp-test.txt ha-767207:/home/docker/cp-test_ha-767207-m04_ha-767207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207 "sudo cat /home/docker/cp-test_ha-767207-m04_ha-767207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m04:/home/docker/cp-test.txt ha-767207-m02:/home/docker/cp-test_ha-767207-m04_ha-767207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m02 "sudo cat /home/docker/cp-test_ha-767207-m04_ha-767207-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 cp ha-767207-m04:/home/docker/cp-test.txt ha-767207-m03:/home/docker/cp-test_ha-767207-m04_ha-767207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 ssh -n ha-767207-m03 "sudo cat /home/docker/cp-test_ha-767207-m04_ha-767207-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.32s)

TestMultiControlPlane/serial/StopSecondaryNode (11.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 node stop m02 -v=7 --alsologtostderr
E0923 10:42:10.330365    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-767207 node stop m02 -v=7 --alsologtostderr: (11.015798753s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr: exit status 7 (769.700529ms)

-- stdout --
	ha-767207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767207-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767207-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767207-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0923 10:42:10.516242   74031 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:42:10.516402   74031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:42:10.516453   74031 out.go:358] Setting ErrFile to fd 2...
	I0923 10:42:10.516467   74031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:42:10.516733   74031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 10:42:10.516987   74031 out.go:352] Setting JSON to false
	I0923 10:42:10.517039   74031 mustload.go:65] Loading cluster: ha-767207
	I0923 10:42:10.517164   74031 notify.go:220] Checking for updates...
	I0923 10:42:10.517578   74031 config.go:182] Loaded profile config "ha-767207": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:42:10.517603   74031 status.go:174] checking status of ha-767207 ...
	I0923 10:42:10.518513   74031 cli_runner.go:164] Run: docker container inspect ha-767207 --format={{.State.Status}}
	I0923 10:42:10.541319   74031 status.go:364] ha-767207 host status = "Running" (err=<nil>)
	I0923 10:42:10.541341   74031 host.go:66] Checking if "ha-767207" exists ...
	I0923 10:42:10.541861   74031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767207
	I0923 10:42:10.577539   74031 host.go:66] Checking if "ha-767207" exists ...
	I0923 10:42:10.577915   74031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:42:10.578009   74031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767207
	I0923 10:42:10.603429   74031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/ha-767207/id_rsa Username:docker}
	I0923 10:42:10.698353   74031 ssh_runner.go:195] Run: systemctl --version
	I0923 10:42:10.703170   74031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:42:10.717466   74031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:42:10.776717   74031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-23 10:42:10.765591408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:42:10.777419   74031 kubeconfig.go:125] found "ha-767207" server: "https://192.168.49.254:8443"
	I0923 10:42:10.777458   74031 api_server.go:166] Checking apiserver status ...
	I0923 10:42:10.777511   74031 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:42:10.790794   74031 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2338/cgroup
	I0923 10:42:10.801885   74031 api_server.go:182] apiserver freezer: "12:freezer:/docker/8c05fd74ce3cc94bbbe83e2f5a8a07bb841ef081c3742ba4cb07594f4266c6a4/kubepods/burstable/pod3f76a8c42993de6fb38367e08114b4f1/628f2e1d6f8b5f475d48141bc4322b9c587c37be3a990df38e5e985a35a5163b"
	I0923 10:42:10.801960   74031 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8c05fd74ce3cc94bbbe83e2f5a8a07bb841ef081c3742ba4cb07594f4266c6a4/kubepods/burstable/pod3f76a8c42993de6fb38367e08114b4f1/628f2e1d6f8b5f475d48141bc4322b9c587c37be3a990df38e5e985a35a5163b/freezer.state
	I0923 10:42:10.811314   74031 api_server.go:204] freezer state: "THAWED"
	I0923 10:42:10.811340   74031 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:42:10.819401   74031 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:42:10.819432   74031 status.go:456] ha-767207 apiserver status = Running (err=<nil>)
	I0923 10:42:10.819444   74031 status.go:176] ha-767207 status: &{Name:ha-767207 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:42:10.819460   74031 status.go:174] checking status of ha-767207-m02 ...
	I0923 10:42:10.819769   74031 cli_runner.go:164] Run: docker container inspect ha-767207-m02 --format={{.State.Status}}
	I0923 10:42:10.837592   74031 status.go:364] ha-767207-m02 host status = "Stopped" (err=<nil>)
	I0923 10:42:10.837627   74031 status.go:377] host is not running, skipping remaining checks
	I0923 10:42:10.837635   74031 status.go:176] ha-767207-m02 status: &{Name:ha-767207-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:42:10.837654   74031 status.go:174] checking status of ha-767207-m03 ...
	I0923 10:42:10.837991   74031 cli_runner.go:164] Run: docker container inspect ha-767207-m03 --format={{.State.Status}}
	I0923 10:42:10.855557   74031 status.go:364] ha-767207-m03 host status = "Running" (err=<nil>)
	I0923 10:42:10.855582   74031 host.go:66] Checking if "ha-767207-m03" exists ...
	I0923 10:42:10.855903   74031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767207-m03
	I0923 10:42:10.876869   74031 host.go:66] Checking if "ha-767207-m03" exists ...
	I0923 10:42:10.877227   74031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:42:10.877275   74031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767207-m03
	I0923 10:42:10.895259   74031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/ha-767207-m03/id_rsa Username:docker}
	I0923 10:42:10.990523   74031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:42:11.003429   74031 kubeconfig.go:125] found "ha-767207" server: "https://192.168.49.254:8443"
	I0923 10:42:11.003460   74031 api_server.go:166] Checking apiserver status ...
	I0923 10:42:11.003506   74031 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:42:11.017557   74031 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2149/cgroup
	I0923 10:42:11.027896   74031 api_server.go:182] apiserver freezer: "12:freezer:/docker/be548745cf1d4bb032f179e2ef4f00cc3dda017ec7993461710610d320a1299d/kubepods/burstable/poda9f4310c541a16513eba20a1db7d1f6e/1da54357b47fdceb2b45a99e98814e718423c7dae8729509587f63104c52b2e7"
	I0923 10:42:11.028049   74031 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/be548745cf1d4bb032f179e2ef4f00cc3dda017ec7993461710610d320a1299d/kubepods/burstable/poda9f4310c541a16513eba20a1db7d1f6e/1da54357b47fdceb2b45a99e98814e718423c7dae8729509587f63104c52b2e7/freezer.state
	I0923 10:42:11.041222   74031 api_server.go:204] freezer state: "THAWED"
	I0923 10:42:11.041264   74031 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:42:11.049598   74031 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:42:11.049682   74031 status.go:456] ha-767207-m03 apiserver status = Running (err=<nil>)
	I0923 10:42:11.049698   74031 status.go:176] ha-767207-m03 status: &{Name:ha-767207-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:42:11.049715   74031 status.go:174] checking status of ha-767207-m04 ...
	I0923 10:42:11.050038   74031 cli_runner.go:164] Run: docker container inspect ha-767207-m04 --format={{.State.Status}}
	I0923 10:42:11.068836   74031 status.go:364] ha-767207-m04 host status = "Running" (err=<nil>)
	I0923 10:42:11.068862   74031 host.go:66] Checking if "ha-767207-m04" exists ...
	I0923 10:42:11.069272   74031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767207-m04
	I0923 10:42:11.087359   74031 host.go:66] Checking if "ha-767207-m04" exists ...
	I0923 10:42:11.087671   74031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:42:11.087732   74031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767207-m04
	I0923 10:42:11.106932   74031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/ha-767207-m04/id_rsa Username:docker}
	I0923 10:42:11.203010   74031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:42:11.215230   74031 status.go:176] ha-767207-m04 status: &{Name:ha-767207-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.79s)
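Context for the status probe in the stderr above: minikube locates the apiserver process, greps `/proc/<pid>/cgroup` for the `freezer` controller line, and reads `freezer.state` under `/sys/fs/cgroup/freezer` to decide whether the container is paused. A minimal sketch of that parsing step (illustrative only; the function name and code here are not minikube's actual implementation, and the cgroup line is a shortened version of the one logged):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// freezerRe matches a cgroup v1 line of the form seen in the log,
// e.g. "12:freezer:/docker/<container-id>/kubepods/...".
var freezerRe = regexp.MustCompile(`^[0-9]+:freezer:(.+)$`)

// extractFreezerStatePath turns a /proc/<pid>/cgroup line into the
// freezer.state path that the probe cats, or reports no match.
func extractFreezerStatePath(cgroupLine string) (string, bool) {
	m := freezerRe.FindStringSubmatch(strings.TrimSpace(cgroupLine))
	if m == nil {
		return "", false
	}
	return "/sys/fs/cgroup/freezer" + m[1] + "/freezer.state", true
}

func main() {
	line := "12:freezer:/docker/8c05fd74ce3c/kubepods/burstable/pod3f76/628f2e1d"
	p, ok := extractFreezerStatePath(line)
	fmt.Println(ok, p)
}
```

If the file read from that path contains "THAWED" (as logged at api_server.go:204), the apiserver container is running normally and the healthz check proceeds.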

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (53.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-767207 node start m02 -v=7 --alsologtostderr: (52.885797066s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (53.99s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (246.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-767207 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-767207 -v=7 --alsologtostderr
E0923 10:43:07.491950    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:07.498317    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:07.509712    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:07.531735    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:07.573191    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:07.654573    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:07.816086    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:08.137818    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:08.779075    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:10.060401    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:12.621815    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:17.743401    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:43:27.985400    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-767207 -v=7 --alsologtostderr: (34.121161567s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-767207 --wait=true -v=7 --alsologtostderr
E0923 10:43:48.467542    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:26.469694    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:29.429963    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:54.172582    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:45:51.354590    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-767207 --wait=true -v=7 --alsologtostderr: (3m32.55635374s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-767207
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (246.82s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-767207 node delete m03 -v=7 --alsologtostderr: (10.513700631s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.44s)
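The `kubectl get nodes -o go-template` invocation above prints the `Ready` condition status for every node. kubectl evaluates that template against the raw NodeList JSON, so the same template can be checked locally with Go's `text/template` over decoded JSON. The sample document below is a made-up fragment for illustration, not real kubectl output; note the template string includes the literal single quotes from the test, which is why they appear in the rendered result:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// nodesJSON is an illustrative stand-in for a NodeList: one node with a
// non-Ready condition and a Ready condition.
const nodesJSON = `{"items":[{"status":{"conditions":[
  {"type":"MemoryPressure","status":"False"},
  {"type":"Ready","status":"True"}]}}]}`

// readyTmpl is the exact template string passed to kubectl in the test.
const readyTmpl = `'{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`

// renderReady decodes the JSON into generic maps (mirroring how kubectl
// sees the object) and executes the template against it.
func renderReady(doc string) (string, error) {
	var data map[string]any
	if err := json.Unmarshal([]byte(doc), &data); err != nil {
		return "", err
	}
	t, err := template.New("ready").Parse(readyTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderReady(nodesJSON)
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // one " True" line per Ready node, wrapped in quotes
}
```

The lowercase field names (`.items`, `.status`, `.type`) work because the template ranges over JSON maps rather than typed Go structs.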

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-767207 stop -v=7 --alsologtostderr: (32.619412461s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr: exit status 7 (123.529404ms)

                                                
                                                
-- stdout --
	ha-767207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767207-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767207-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 10:47:58.779295  101705 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:47:58.779496  101705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:47:58.779523  101705 out.go:358] Setting ErrFile to fd 2...
	I0923 10:47:58.779543  101705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:47:58.779817  101705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 10:47:58.780043  101705 out.go:352] Setting JSON to false
	I0923 10:47:58.780105  101705 mustload.go:65] Loading cluster: ha-767207
	I0923 10:47:58.780194  101705 notify.go:220] Checking for updates...
	I0923 10:47:58.780682  101705 config.go:182] Loaded profile config "ha-767207": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:47:58.780725  101705 status.go:174] checking status of ha-767207 ...
	I0923 10:47:58.781609  101705 cli_runner.go:164] Run: docker container inspect ha-767207 --format={{.State.Status}}
	I0923 10:47:58.801844  101705 status.go:364] ha-767207 host status = "Stopped" (err=<nil>)
	I0923 10:47:58.801887  101705 status.go:377] host is not running, skipping remaining checks
	I0923 10:47:58.801894  101705 status.go:176] ha-767207 status: &{Name:ha-767207 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:47:58.801934  101705 status.go:174] checking status of ha-767207-m02 ...
	I0923 10:47:58.802274  101705 cli_runner.go:164] Run: docker container inspect ha-767207-m02 --format={{.State.Status}}
	I0923 10:47:58.836920  101705 status.go:364] ha-767207-m02 host status = "Stopped" (err=<nil>)
	I0923 10:47:58.836961  101705 status.go:377] host is not running, skipping remaining checks
	I0923 10:47:58.836969  101705 status.go:176] ha-767207-m02 status: &{Name:ha-767207-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:47:58.836996  101705 status.go:174] checking status of ha-767207-m04 ...
	I0923 10:47:58.837291  101705 cli_runner.go:164] Run: docker container inspect ha-767207-m04 --format={{.State.Status}}
	I0923 10:47:58.855015  101705 status.go:364] ha-767207-m04 host status = "Stopped" (err=<nil>)
	I0923 10:47:58.855038  101705 status.go:377] host is not running, skipping remaining checks
	I0923 10:47:58.855045  101705 status.go:176] ha-767207-m04 status: &{Name:ha-767207-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.74s)
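The `docker container inspect --format=...` calls in the stderr above use Docker's `--format` flag, which is a Go template executed against the container's inspect data. Both format strings seen in this log can be exercised locally with `text/template` over a made-up inspect fragment (Docker's real engine adds extra helper functions and typed structs, but the template semantics used here are the same; the JSON below is illustrative, not real `docker inspect` output):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// inspectJSON is a minimal, made-up fragment of container-inspect output
// covering the two fields the status probe reads.
const inspectJSON = `{
  "State": {"Status": "Stopped"},
  "NetworkSettings": {"Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32793"}]}}
}`

// render executes a docker-style --format template against decoded JSON.
func render(format, doc string) (string, error) {
	var data map[string]any
	if err := json.Unmarshal([]byte(doc), &data); err != nil {
		return "", err
	}
	t, err := template.New("f").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// The two formats used by the status probe in this log.
	status, _ := render(`{{.State.Status}}`, inspectJSON)
	port, _ := render(`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, inspectJSON)
	fmt.Println(status, port) // Stopped 32793
}
```

The nested `index` calls mirror how the probe digs the host-side SSH port out of the port map before opening the ssh client on 127.0.0.1.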

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (88.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-767207 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0923 10:48:07.491407    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:48:35.196921    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:49:26.470312    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-767207 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m27.628910862s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (88.60s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-767207 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-767207 --control-plane -v=7 --alsologtostderr: (43.609903134s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-767207 status -v=7 --alsologtostderr: (1.011056177s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.011966372s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

                                                
                                    
TestImageBuild/serial/Setup (34.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-021639 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-021639 --driver=docker  --container-runtime=docker: (34.506493461s)
--- PASS: TestImageBuild/serial/Setup (34.51s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.01s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-021639
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-021639: (2.013657641s)
--- PASS: TestImageBuild/serial/NormalBuild (2.01s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.38s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-021639
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-021639: (1.37619159s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.38s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-021639
image_test.go:133: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-021639: (1.020238291s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.71s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-021639
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.71s)

                                                
                                    
TestJSONOutput/start/Command (41.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-203175 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-203175 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (41.824354039s)
--- PASS: TestJSONOutput/start/Command (41.83s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (1.15s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-203175 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 pause -p json-output-203175 --output=json --user=testUser: (1.148740391s)
--- PASS: TestJSONOutput/pause/Command (1.15s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-203175 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-203175 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-203175 --output=json --user=testUser: (10.949583471s)
--- PASS: TestJSONOutput/stop/Command (10.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-713384 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-713384 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.100305ms)

-- stdout --
	{"specversion":"1.0","id":"fdb12366-e783-4bbc-9878-5c00d56a25be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-713384] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dad32e36-066e-408b-914b-d7500b91c71f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"a1de63d5-96fb-49ec-b0c9-0e19a7940d4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5c38446-3f19-49da-9aaa-cddee20b2df4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig"}}
	{"specversion":"1.0","id":"a92b61b6-29aa-448b-8661-ce1106aa351a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube"}}
	{"specversion":"1.0","id":"046b199c-a786-4071-9012-a905994a700a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6cf365ed-37aa-4a59-8f42-feea422e3add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c94b240e-a6c0-4b92-846e-30182ce1fe7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-713384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-713384
--- PASS: TestErrorJSONOutput (0.22s)
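The `--output=json` lines in the stdout block above are CloudEvents-style envelopes. As a minimal illustration (not part of the test harness), the error event can be parsed like this; the event line is copied verbatim from the stdout above:

```python
import json

# Error event copied from the stdout block of TestErrorJSONOutput.
line = '''{"specversion":"1.0","id":"c94b240e-a6c0-4b92-846e-30182ce1fe7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}'''

event = json.loads(line)
kind = event["type"].rsplit(".", 1)[-1]    # last segment of the event type, e.g. "error"
exitcode = int(event["data"]["exitcode"])  # exit codes arrive as strings in the payload
```

The `exitcode` field in the payload matches the `exit status 56` reported for the command itself.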
TestKicCustomNetwork/create_custom_network (32.22s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-937836 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-937836 --network=: (30.10573357s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-937836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-937836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-937836: (2.084394899s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.22s)
TestKicCustomNetwork/use_default_bridge_network (37.65s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-238609 --network=bridge
E0923 10:53:07.491643    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-238609 --network=bridge: (35.625820207s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-238609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-238609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-238609: (2.009874465s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.65s)
TestKicExistingNetwork (37.7s)
=== RUN   TestKicExistingNetwork
I0923 10:53:11.321198    7519 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 10:53:11.335783    7519 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 10:53:11.335866    7519 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 10:53:11.335884    7519 cli_runner.go:164] Run: docker network inspect existing-network
W0923 10:53:11.352489    7519 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 10:53:11.352536    7519 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0923 10:53:11.352554    7519 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0923 10:53:11.352665    7519 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 10:53:11.369691    7519 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-834f6021d60a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ff:92:9c:f0} reservation:<nil>}
I0923 10:53:11.370004    7519 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40015b6f50}
I0923 10:53:11.370028    7519 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 10:53:11.370076    7519 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 10:53:11.446248    7519 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-270349 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-270349 --network=existing-network: (35.592938383s)
helpers_test.go:175: Cleaning up "existing-network-270349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-270349
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-270349: (1.947395102s)
I0923 10:53:49.003183    7519 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.70s)
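The subnet scan in the log above (skipping the taken 192.168.49.0/24 and settling on 192.168.58.0/24) can be sketched as follows. This is an illustration, not minikube's actual code: `first_free_subnet` is a hypothetical helper, and the stride of 9 /24s between candidates is an assumption inferred from the two subnets shown.

```python
import ipaddress

# Subnets already claimed by docker bridges, as gathered from
# `docker network inspect` in the log above.
taken = {ipaddress.ip_network("192.168.49.0/24")}

def first_free_subnet(start="192.168.49.0/24", step=9, attempts=20):
    """Walk candidate private /24s, skipping any that are taken.
    The step of 9 /24s per candidate is an assumption (49 -> 58 in the log)."""
    net = ipaddress.ip_network(start)
    for _ in range(attempts):
        if net not in taken:
            return net
        # Advance by `step` /24 blocks (256 addresses each).
        net = ipaddress.ip_network((int(net.network_address) + step * 256, 24))
    return None
```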
TestKicCustomSubnet (33.14s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-038984 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-038984 --subnet=192.168.60.0/24: (31.047201044s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-038984 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-038984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-038984
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-038984: (2.062851296s)
--- PASS: TestKicCustomSubnet (33.14s)
TestKicStaticIP (32.74s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-634181 --static-ip=192.168.200.200
E0923 10:54:26.470330    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-634181 --static-ip=192.168.200.200: (30.461918907s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-634181 ip
helpers_test.go:175: Cleaning up "static-ip-634181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-634181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-634181: (2.136138532s)
--- PASS: TestKicStaticIP (32.74s)
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)
TestMinikubeProfile (71.43s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-527227 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-527227 --driver=docker  --container-runtime=docker: (29.748339468s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-529721 --driver=docker  --container-runtime=docker
E0923 10:55:49.534090    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-529721 --driver=docker  --container-runtime=docker: (35.558017207s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-527227
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-529721
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-529721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-529721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-529721: (2.196144147s)
helpers_test.go:175: Cleaning up "first-527227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-527227
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-527227: (2.060921729s)
--- PASS: TestMinikubeProfile (71.43s)
TestMountStart/serial/StartWithMountFirst (10.73s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-176105 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-176105 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.732686614s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.73s)
TestMountStart/serial/VerifyMountFirst (0.24s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-176105 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
TestMountStart/serial/StartWithMountSecond (7.43s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-177804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-177804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.434665505s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.43s)
TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-177804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)
TestMountStart/serial/DeleteFirst (1.47s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-176105 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-176105 --alsologtostderr -v=5: (1.47203457s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-177804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-177804
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-177804: (1.199107857s)
--- PASS: TestMountStart/serial/Stop (1.20s)
TestMountStart/serial/RestartStopped (8.08s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-177804
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-177804: (7.080933588s)
--- PASS: TestMountStart/serial/RestartStopped (8.08s)
TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-177804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
TestMultiNode/serial/FreshStart2Nodes (74.68s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-107326 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-107326 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m14.118382468s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.68s)
TestMultiNode/serial/DeployApp2Nodes (37.75s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-107326 -- rollout status deployment/busybox: (3.824001574s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0923 10:57:56.842491    7519 retry.go:31] will retry after 1.003439387s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0923 10:57:58.023520    7519 retry.go:31] will retry after 1.921188353s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0923 10:58:00.392875    7519 retry.go:31] will retry after 2.069035764s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0923 10:58:02.608294    7519 retry.go:31] will retry after 2.127228593s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0923 10:58:04.885331    7519 retry.go:31] will retry after 7.54581698s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E0923 10:58:07.491692    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0923 10:58:12.574463    7519 retry.go:31] will retry after 5.403401264s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0923 10:58:18.141430    7519 retry.go:31] will retry after 10.465555354s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-dbcjw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-s4gkh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-dbcjw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-s4gkh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-dbcjw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-s4gkh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.75s)
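The `will retry after …` lines above come from a poll loop that re-runs the `kubectl get pods` check with a randomized, growing delay between attempts. A rough sketch of that pattern, under the assumption of linear-plus-jitter growth (the real schedule lives in minikube's retry.go and may differ):

```python
import random
import time

def poll(check, max_attempts=10, base=1.0):
    """Retry `check(attempt)` until it returns True, sleeping a randomized,
    growing delay between attempts. Raises TimeoutError if the condition
    is never met within max_attempts."""
    for attempt in range(max_attempts):
        if check(attempt):
            return attempt
        # Linear growth with +/-50% jitter; an assumption, not retry.go's
        # exact schedule.
        time.sleep(base * (attempt + 1) * random.uniform(0.5, 1.5))
    raise TimeoutError("condition never met")
```

In the log, `check` corresponds to "did the jsonpath query return two Pod IPs"; the test succeeded once the second busybox pod obtained its IP.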
TestMultiNode/serial/PingHostFrom2Pods (1.01s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-dbcjw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-dbcjw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-s4gkh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-107326 -- exec busybox-7dff88458-s4gkh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
TestMultiNode/serial/AddNode (19.06s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-107326 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-107326 -v 3 --alsologtostderr: (18.326303085s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.06s)
TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-107326 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)
TestMultiNode/serial/ProfileList (0.67s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)
TestMultiNode/serial/CopyFile (10.11s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp testdata/cp-test.txt multinode-107326:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4242187394/001/cp-test_multinode-107326.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326:/home/docker/cp-test.txt multinode-107326-m02:/home/docker/cp-test_multinode-107326_multinode-107326-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m02 "sudo cat /home/docker/cp-test_multinode-107326_multinode-107326-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326:/home/docker/cp-test.txt multinode-107326-m03:/home/docker/cp-test_multinode-107326_multinode-107326-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m03 "sudo cat /home/docker/cp-test_multinode-107326_multinode-107326-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp testdata/cp-test.txt multinode-107326-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4242187394/001/cp-test_multinode-107326-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326-m02:/home/docker/cp-test.txt multinode-107326:/home/docker/cp-test_multinode-107326-m02_multinode-107326.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326 "sudo cat /home/docker/cp-test_multinode-107326-m02_multinode-107326.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326-m02:/home/docker/cp-test.txt multinode-107326-m03:/home/docker/cp-test_multinode-107326-m02_multinode-107326-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m03 "sudo cat /home/docker/cp-test_multinode-107326-m02_multinode-107326-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp testdata/cp-test.txt multinode-107326-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4242187394/001/cp-test_multinode-107326-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326-m03:/home/docker/cp-test.txt multinode-107326:/home/docker/cp-test_multinode-107326-m03_multinode-107326.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326 "sudo cat /home/docker/cp-test_multinode-107326-m03_multinode-107326.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 cp multinode-107326-m03:/home/docker/cp-test.txt multinode-107326-m02:/home/docker/cp-test_multinode-107326-m03_multinode-107326-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 ssh -n multinode-107326-m02 "sudo cat /home/docker/cp-test_multinode-107326-m03_multinode-107326-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.11s)

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-107326 node stop m03: (1.218750301s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-107326 status: exit status 7 (510.137842ms)

-- stdout --
	multinode-107326
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-107326-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-107326-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr: exit status 7 (500.051243ms)

-- stdout --
	multinode-107326
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-107326-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-107326-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 10:59:03.116758  176365 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:59:03.116913  176365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:59:03.116925  176365 out.go:358] Setting ErrFile to fd 2...
	I0923 10:59:03.116932  176365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:59:03.117212  176365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 10:59:03.117405  176365 out.go:352] Setting JSON to false
	I0923 10:59:03.117439  176365 mustload.go:65] Loading cluster: multinode-107326
	I0923 10:59:03.117536  176365 notify.go:220] Checking for updates...
	I0923 10:59:03.117865  176365 config.go:182] Loaded profile config "multinode-107326": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:59:03.117879  176365 status.go:174] checking status of multinode-107326 ...
	I0923 10:59:03.118800  176365 cli_runner.go:164] Run: docker container inspect multinode-107326 --format={{.State.Status}}
	I0923 10:59:03.137700  176365 status.go:364] multinode-107326 host status = "Running" (err=<nil>)
	I0923 10:59:03.137762  176365 host.go:66] Checking if "multinode-107326" exists ...
	I0923 10:59:03.138084  176365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-107326
	I0923 10:59:03.157300  176365 host.go:66] Checking if "multinode-107326" exists ...
	I0923 10:59:03.157597  176365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:59:03.157658  176365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-107326
	I0923 10:59:03.185758  176365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/multinode-107326/id_rsa Username:docker}
	I0923 10:59:03.278177  176365 ssh_runner.go:195] Run: systemctl --version
	I0923 10:59:03.282469  176365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:59:03.293998  176365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:59:03.346577  176365 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-23 10:59:03.336014472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 10:59:03.347168  176365 kubeconfig.go:125] found "multinode-107326" server: "https://192.168.67.2:8443"
	I0923 10:59:03.347216  176365 api_server.go:166] Checking apiserver status ...
	I0923 10:59:03.347262  176365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:59:03.359477  176365 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2303/cgroup
	I0923 10:59:03.369481  176365 api_server.go:182] apiserver freezer: "12:freezer:/docker/9020b99d77d81037d2797decf78fae05cc8d30cd8e550d5a2e9797ce582f8261/kubepods/burstable/pod31c99b745fb2b682c9a5743004326272/16c38f8aa65fca1cb5ee58f64f6b01303d215b15ebf352c7beb46871d33dd93e"
	I0923 10:59:03.369553  176365 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9020b99d77d81037d2797decf78fae05cc8d30cd8e550d5a2e9797ce582f8261/kubepods/burstable/pod31c99b745fb2b682c9a5743004326272/16c38f8aa65fca1cb5ee58f64f6b01303d215b15ebf352c7beb46871d33dd93e/freezer.state
	I0923 10:59:03.378809  176365 api_server.go:204] freezer state: "THAWED"
	I0923 10:59:03.378841  176365 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0923 10:59:03.386740  176365 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0923 10:59:03.386770  176365 status.go:456] multinode-107326 apiserver status = Running (err=<nil>)
	I0923 10:59:03.386780  176365 status.go:176] multinode-107326 status: &{Name:multinode-107326 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:59:03.386797  176365 status.go:174] checking status of multinode-107326-m02 ...
	I0923 10:59:03.387176  176365 cli_runner.go:164] Run: docker container inspect multinode-107326-m02 --format={{.State.Status}}
	I0923 10:59:03.403865  176365 status.go:364] multinode-107326-m02 host status = "Running" (err=<nil>)
	I0923 10:59:03.403889  176365 host.go:66] Checking if "multinode-107326-m02" exists ...
	I0923 10:59:03.404187  176365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-107326-m02
	I0923 10:59:03.424103  176365 host.go:66] Checking if "multinode-107326-m02" exists ...
	I0923 10:59:03.424431  176365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:59:03.424478  176365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-107326-m02
	I0923 10:59:03.441223  176365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/multinode-107326-m02/id_rsa Username:docker}
	I0923 10:59:03.533975  176365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:59:03.546218  176365 status.go:176] multinode-107326-m02 status: &{Name:multinode-107326-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:59:03.546254  176365 status.go:174] checking status of multinode-107326-m03 ...
	I0923 10:59:03.546556  176365 cli_runner.go:164] Run: docker container inspect multinode-107326-m03 --format={{.State.Status}}
	I0923 10:59:03.563414  176365 status.go:364] multinode-107326-m03 host status = "Stopped" (err=<nil>)
	I0923 10:59:03.563436  176365 status.go:377] host is not running, skipping remaining checks
	I0923 10:59:03.563443  176365 status.go:176] multinode-107326-m03 status: &{Name:multinode-107326-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

TestMultiNode/serial/StartAfterStop (10.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-107326 node start m03 -v=7 --alsologtostderr: (9.943853121s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.70s)

TestMultiNode/serial/RestartKeepsNodes (118s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-107326
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-107326
E0923 10:59:26.469864    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:59:30.558400    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-107326: (22.645169018s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-107326 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-107326 --wait=true -v=8 --alsologtostderr: (1m35.218342164s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-107326
--- PASS: TestMultiNode/serial/RestartKeepsNodes (118.00s)

TestMultiNode/serial/DeleteNode (5.56s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-107326 node delete m03: (4.913045481s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.56s)

TestMultiNode/serial/StopMultiNode (21.64s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-107326 stop: (21.452438014s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-107326 status: exit status 7 (93.876085ms)

-- stdout --
	multinode-107326
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-107326-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr: exit status 7 (95.175033ms)

-- stdout --
	multinode-107326
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-107326-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 11:01:39.425888  189996 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:01:39.426061  189996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:01:39.426088  189996 out.go:358] Setting ErrFile to fd 2...
	I0923 11:01:39.426111  189996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:01:39.426357  189996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
	I0923 11:01:39.426555  189996 out.go:352] Setting JSON to false
	I0923 11:01:39.426619  189996 mustload.go:65] Loading cluster: multinode-107326
	I0923 11:01:39.426652  189996 notify.go:220] Checking for updates...
	I0923 11:01:39.427110  189996 config.go:182] Loaded profile config "multinode-107326": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:01:39.427149  189996 status.go:174] checking status of multinode-107326 ...
	I0923 11:01:39.427714  189996 cli_runner.go:164] Run: docker container inspect multinode-107326 --format={{.State.Status}}
	I0923 11:01:39.446094  189996 status.go:364] multinode-107326 host status = "Stopped" (err=<nil>)
	I0923 11:01:39.446115  189996 status.go:377] host is not running, skipping remaining checks
	I0923 11:01:39.446122  189996 status.go:176] multinode-107326 status: &{Name:multinode-107326 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:01:39.446146  189996 status.go:174] checking status of multinode-107326-m02 ...
	I0923 11:01:39.446466  189996 cli_runner.go:164] Run: docker container inspect multinode-107326-m02 --format={{.State.Status}}
	I0923 11:01:39.471087  189996 status.go:364] multinode-107326-m02 host status = "Stopped" (err=<nil>)
	I0923 11:01:39.471112  189996 status.go:377] host is not running, skipping remaining checks
	I0923 11:01:39.471130  189996 status.go:176] multinode-107326-m02 status: &{Name:multinode-107326-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.64s)

TestMultiNode/serial/RestartMultiNode (52.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-107326 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-107326 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (52.143556141s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-107326 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.82s)

TestMultiNode/serial/ValidateNameConflict (34.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-107326
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-107326-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-107326-m02 --driver=docker  --container-runtime=docker: exit status 14 (85.161407ms)

-- stdout --
	* [multinode-107326-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-107326-m02' is duplicated with machine name 'multinode-107326-m02' in profile 'multinode-107326'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-107326-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-107326-m03 --driver=docker  --container-runtime=docker: (31.88478623s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-107326
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-107326: exit status 80 (416.500875ms)

-- stdout --
	* Adding node m03 to cluster multinode-107326 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-107326-m03 already exists in multinode-107326-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-107326-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-107326-m03: (2.077783887s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.51s)

TestPreload (170s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-717283 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0923 11:04:26.469819    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-717283 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m56.372756591s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-717283 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-717283 image pull gcr.io/k8s-minikube/busybox: (2.028694554s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-717283
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-717283: (10.860692203s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-717283 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-717283 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (38.013670872s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-717283 image list
helpers_test.go:175: Cleaning up "test-preload-717283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-717283
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-717283: (2.507078889s)
--- PASS: TestPreload (170.00s)

TestScheduledStopUnix (106.92s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-778044 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-778044 --memory=2048 --driver=docker  --container-runtime=docker: (33.474459769s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778044 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-778044 -n scheduled-stop-778044
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778044 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 11:06:34.733755    7519 retry.go:31] will retry after 130.597µs: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.734154    7519 retry.go:31] will retry after 102.541µs: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.735234    7519 retry.go:31] will retry after 120.072µs: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.736365    7519 retry.go:31] will retry after 270.493µs: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.738131    7519 retry.go:31] will retry after 524.632µs: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.739305    7519 retry.go:31] will retry after 1.135832ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.741453    7519 retry.go:31] will retry after 1.592877ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.743642    7519 retry.go:31] will retry after 2.118009ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.746849    7519 retry.go:31] will retry after 1.854678ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.749095    7519 retry.go:31] will retry after 4.006451ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.753242    7519 retry.go:31] will retry after 5.519432ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.759472    7519 retry.go:31] will retry after 12.213858ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.772716    7519 retry.go:31] will retry after 16.768439ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.789975    7519 retry.go:31] will retry after 28.85444ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
I0923 11:06:34.819209    7519 retry.go:31] will retry after 38.270539ms: open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/scheduled-stop-778044/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778044 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778044 -n scheduled-stop-778044
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-778044
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778044 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-778044
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-778044: exit status 7 (69.211168ms)

-- stdout --
	scheduled-stop-778044
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778044 -n scheduled-stop-778044
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778044 -n scheduled-stop-778044: exit status 7 (72.531325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-778044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-778044
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-778044: (1.635503236s)
--- PASS: TestScheduledStopUnix (106.92s)

                                                
                                    
TestSkaffold (118.33s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe674610800 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-483970 --memory=2600 --driver=docker  --container-runtime=docker
E0923 11:08:07.492214    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-483970 --memory=2600 --driver=docker  --container-runtime=docker: (32.482707272s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe674610800 run --minikube-profile skaffold-483970 --kube-context skaffold-483970 --status-check=true --port-forward=false --interactive=false
E0923 11:09:26.470164    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe674610800 run --minikube-profile skaffold-483970 --kube-context skaffold-483970 --status-check=true --port-forward=false --interactive=false: (1m10.592779994s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-58d596bf87-k2877" [2f45f835-4eea-420e-83d4-8d141f149fe1] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003823862s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6b9844cb6b-4fv45" [bbc28252-51cd-4373-babf-6e8b0a55394d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003477059s
helpers_test.go:175: Cleaning up "skaffold-483970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-483970
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-483970: (3.04227844s)
--- PASS: TestSkaffold (118.33s)

                                                
                                    
TestInsufficientStorage (10.79s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-006053 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-006053 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.536315933s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bae601c0-6d19-40a5-aa2c-0cb895f45ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-006053] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9dc29267-3e2b-40b6-be90-a7369e3efcbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"09c7bb54-7674-4502-a086-4339435e2097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e3eed1b-fe0f-4a8c-9976-b677048e6d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig"}}
	{"specversion":"1.0","id":"9699761c-2aee-430d-9737-3002c01be968","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube"}}
	{"specversion":"1.0","id":"89361f22-0eb3-4f0c-8081-a43f5993ab26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8ed33671-bd47-42ab-8009-1062c556cedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"363d6a97-5135-420b-877a-6f887ab59987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0c6bb11a-cf54-46ad-bb94-ece3bd3b036d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"60945ecb-1ac7-4a52-9da1-9b7b47e5a02c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0f8f9f2-b576-4668-806d-bc3b31e37710","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f837e9c6-b028-4e19-b879-9e60b82628de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-006053\" primary control-plane node in \"insufficient-storage-006053\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7abb062-3056-4898-aba7-2ffc2ec7ab90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b9fc56b-6f24-481f-ab0c-03d2d1c13779","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c263aa4-07ec-4e12-9a82-0d9e2753db58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
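Each stdout line above is a CloudEvents 1.0 envelope; `minikube start --output=json` streams one JSON object per line, ending here in an `io.k8s.sigs.minikube.error` event with `exitcode` 26. A small sketch of consuming that stream and pulling out the final error payload (field names taken from the output above):

```python
import json

def last_error(lines):
    """Scan minikube --output=json lines (one CloudEvents envelope each)
    and return the data payload of the last *.error event, or None."""
    err = None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type", "").endswith(".error"):
            err = event.get("data", {})
    return err
```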
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-006053 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-006053 --output=json --layout=cluster: exit status 7 (279.335582ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-006053","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-006053","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:09:54.810558  224722 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-006053" does not appear in /home/jenkins/minikube-integration/19689-2206/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-006053 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-006053 --output=json --layout=cluster: exit status 7 (291.354259ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-006053","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-006053","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:09:55.101847  224785 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-006053" does not appear in /home/jenkins/minikube-integration/19689-2206/kubeconfig
	E0923 11:09:55.113170  224785 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/insufficient-storage-006053/events.json: no such file or directory

                                                
                                                
** /stderr **
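`status --output=json --layout=cluster` returns the structured document shown above, with HTTP-style StatusCodes per component (405 Stopped, 500 Error, 507 InsufficientStorage). A sketch of walking that document to list the unhealthy components (an illustrative helper, not part of the test suite):

```python
import json

def unhealthy_components(status_json):
    """Given minikube's --layout=cluster status JSON (as above), return
    (node, component, StatusName) triples with a 4xx/5xx StatusCode."""
    doc = json.loads(status_json)
    bad = []
    for node in doc.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            if comp.get("StatusCode", 0) >= 400:
                bad.append((node["Name"], name, comp.get("StatusName")))
    return bad
```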
helpers_test.go:175: Cleaning up "insufficient-storage-006053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-006053
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-006053: (1.682360691s)
--- PASS: TestInsufficientStorage (10.79s)

                                                
                                    
TestRunningBinaryUpgrade (84.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.196241974 start -p running-upgrade-263812 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0923 11:18:07.491643    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.196241974 start -p running-upgrade-263812 --memory=2200 --vm-driver=docker  --container-runtime=docker: (43.497807072s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-263812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-263812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.154994087s)
helpers_test.go:175: Cleaning up "running-upgrade-263812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-263812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-263812: (2.783090415s)
--- PASS: TestRunningBinaryUpgrade (84.03s)

                                                
                                    
TestKubernetesUpgrade (388.02s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.563294864s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-452913
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-452913: (10.940343415s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-452913 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-452913 status --format={{.Host}}: exit status 7 (77.557044ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m41.86967007s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-452913 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (119.33952ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-452913] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-452913
	    minikube start -p kubernetes-upgrade-452913 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4529132 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-452913 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
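The K8S_DOWNGRADE_UNSUPPORTED error above (exit status 106) fires because the requested v1.20.0 is older than the existing cluster's v1.31.1. A minimal sketch of such a version-downgrade guard (not minikube's actual implementation):

```python
def parse_version(v):
    """Turn a version string like 'v1.31.1' into a comparable tuple (1, 31, 1)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_no_downgrade(current, requested):
    """Raise if the requested Kubernetes version is older than the
    cluster's current one; upgrades and same-version restarts pass."""
    if parse_version(requested) < parse_version(current):
        raise ValueError(
            f"cannot downgrade existing Kubernetes {current} cluster to {requested}"
        )
```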
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-452913 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.430795108s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-452913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-452913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-452913: (2.886815681s)
--- PASS: TestKubernetesUpgrade (388.02s)

                                                
                                    
TestMissingContainerUpgrade (119.75s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3036067421 start -p missing-upgrade-706879 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3036067421 start -p missing-upgrade-706879 --memory=2200 --driver=docker  --container-runtime=docker: (46.165554791s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-706879
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-706879: (10.345060167s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-706879
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-706879 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 11:17:15.808224    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-706879 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.965166947s)
helpers_test.go:175: Cleaning up "missing-upgrade-706879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-706879
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-706879: (2.586947572s)
--- PASS: TestMissingContainerUpgrade (119.75s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-766230 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-766230 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (101.055663ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-766230] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
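The MK_USAGE failure above (exit status 14) rejects combining `--no-kubernetes` with `--kubernetes-version`. A sketch of that mutual-exclusion check (hypothetical function, not minikube's actual flag validation):

```python
def validate_flags(no_kubernetes, kubernetes_version):
    """Reject the contradictory flag combination shown above: asking for
    no Kubernetes while pinning a Kubernetes version."""
    if no_kubernetes and kubernetes_version:
        raise SystemExit(
            "cannot specify --kubernetes-version with --no-kubernetes"
        )
```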
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-766230 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-766230 --driver=docker  --container-runtime=docker: (42.225806171s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-766230 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-766230 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-766230 --no-kubernetes --driver=docker  --container-runtime=docker: (16.071989824s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-766230 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-766230 status -o json: exit status 2 (301.203065ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-766230","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-766230
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-766230: (1.735801361s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.11s)

                                                
                                    
TestNoKubernetes/serial/Start (9.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-766230 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-766230 --no-kubernetes --driver=docker  --container-runtime=docker: (9.496603007s)
--- PASS: TestNoKubernetes/serial/Start (9.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-766230 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-766230 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.893094ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-766230
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-766230: (1.220205902s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-766230 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-766230 --driver=docker  --container-runtime=docker: (7.810399078s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-766230 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-766230 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.353459ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (127.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3458187867 start -p stopped-upgrade-095723 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0923 11:14:26.470176    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:31.947508    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:31.954586    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:31.965934    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:31.987323    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:32.028728    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:32.110062    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:32.271588    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:32.593279    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:33.235400    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:34.516739    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:37.078881    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:14:42.200356    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3458187867 start -p stopped-upgrade-095723 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m25.356624519s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3458187867 -p stopped-upgrade-095723 stop
E0923 11:14:52.442244    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3458187867 -p stopped-upgrade-095723 stop: (10.967210211s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-095723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 11:15:12.924046    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-095723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.01629822s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (127.34s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-095723
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-095723: (1.348060274s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                    
TestPause/serial/Start (44.41s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-020958 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0923 11:19:26.470201    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:31.944836    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-020958 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (44.40537584s)
--- PASS: TestPause/serial/Start (44.41s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (34s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-020958 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 11:19:59.649923    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-020958 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.983423233s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.00s)

                                                
                                    
TestPause/serial/Pause (0.6s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-020958 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.60s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-020958 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-020958 --output=json --layout=cluster: exit status 2 (334.061342ms)

-- stdout --
	{"Name":"pause-020958","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-020958","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
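The exit status 2 above is expected here: `minikube status` signals a non-running cluster through its exit code, and the JSON payload carries the per-component detail the test asserts against. A minimal sketch of picking that payload apart (the JSON string is copied verbatim from the stdout block above; the numeric codes 200/405/418 are simply what minikube reports for OK/Stopped/Paused in this output):

```python
import json

# Payload printed by `minikube status -p pause-020958 --output=json --layout=cluster`,
# copied verbatim from the test output above.
payload = '{"Name":"pause-020958","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-020958","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'

status = json.loads(payload)

# Cluster-level state: paused, reported with status code 418.
assert status["StatusName"] == "Paused"
assert status["StatusCode"] == 418

# Per-node component breakdown: apiserver paused, kubelet stopped.
node = status["Nodes"][0]
assert node["Components"]["apiserver"]["StatusName"] == "Paused"
assert node["Components"]["kubelet"]["StatusCode"] == 405

print(status["Step"], "-", status["StepDetail"])
```

This is only an illustration of the payload shape shown in this run; field values will differ for an unpaused or stopped profile.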

                                                
                                    
TestPause/serial/Unpause (0.74s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-020958 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (0.79s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-020958 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
TestPause/serial/DeletePaused (2.16s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-020958 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-020958 --alsologtostderr -v=5: (2.163194261s)
--- PASS: TestPause/serial/DeletePaused (2.16s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.49s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-020958
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-020958: exit status 1 (28.367299ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-020958: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (76.17s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m16.167865883s)
--- PASS: TestNetworkPlugins/group/auto/Start (76.17s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-627100 "pgrep -a kubelet"
I0923 11:21:41.670448    7519 config.go:182] Loaded profile config "auto-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.4s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-627100 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zvszv" [dd945bff-958d-454b-b562-8697b66bd6cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zvszv" [dd945bff-958d-454b-b562-8697b66bd6cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005812089s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
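The Localhost and HairPin checks above (and their kindnet/calico/flannel counterparts below) both reduce to a `nc -w 5 -z host port` probe: the test passes iff a TCP connection to the target port can be opened within the timeout. A minimal self-contained sketch of that probe in Python — the throwaway local listener here is only a stand-in for the netcat pod's port 8080, which is what the real tests dial:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Rough equivalent of `nc -w 5 -z host port`: True iff a TCP
    connection is established within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in listener so the probe has something to hit.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]

print(can_connect("127.0.0.1", port))   # listening port -> True
srv.close()
print(can_connect("127.0.0.1", port))   # closed port -> False
```

In the actual tests the probe runs inside the netcat pod (`kubectl exec deployment/netcat`), so "Localhost" checks the pod's own loopback and "HairPin" checks reaching the pod back through its own service name.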

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (74.74s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m14.74220505s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.74s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (79.3s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0923 11:23:07.492049    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m19.295526371s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ppbll" [5666eb5c-4c6b-4b52-ab88-367ab2abc8c0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005620561s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-627100 "pgrep -a kubelet"
I0923 11:23:39.538865    7519 config.go:182] Loaded profile config "kindnet-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.4s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-627100 replace --force -f testdata/netcat-deployment.yaml
I0923 11:23:39.917530    7519 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4jkw7" [b36da858-4411-42fb-a663-7131917abee5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4jkw7" [b36da858-4411-42fb-a663-7131917abee5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005234532s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.34s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.31s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8x66n" [b8206e8e-c7e1-4ea2-b2df-53315d0a7e69] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.015806777s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (62.06s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m2.057965365s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.06s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-627100 "pgrep -a kubelet"
I0923 11:24:19.069278    7519 config.go:182] Loaded profile config "calico-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-627100 replace --force -f testdata/netcat-deployment.yaml
I0923 11:24:19.389908    7519 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p82tp" [8e7713eb-1fe0-4162-bd64-fea0f6665208] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p82tp" [8e7713eb-1fe0-4162-bd64-fea0f6665208] Running
E0923 11:24:26.470407    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004247816s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0923 11:24:31.945002    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Start (88.25s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m28.245616964s)
--- PASS: TestNetworkPlugins/group/false/Start (88.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-627100 "pgrep -a kubelet"
I0923 11:25:20.050855    7519 config.go:182] Loaded profile config "custom-flannel-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-627100 replace --force -f testdata/netcat-deployment.yaml
I0923 11:25:20.391991    7519 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tjggf" [a04c27d9-36ab-430c-92fc-7fa6f3c3a007] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tjggf" [a04c27d9-36ab-430c-92fc-7fa6f3c3a007] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003735056s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (79.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m19.194185734s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.19s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-627100 "pgrep -a kubelet"
I0923 11:26:27.597312    7519 config.go:182] Loaded profile config "false-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.39s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-627100 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ddxcn" [f7cb5029-0465-4a44-bcc4-814dc404fe04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ddxcn" [f7cb5029-0465-4a44-bcc4-814dc404fe04] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003948099s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.38s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0923 11:27:02.542989    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (59.383088709s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-627100 "pgrep -a kubelet"
I0923 11:27:15.027807    7519 config.go:182] Loaded profile config "enable-default-cni-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-627100 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jb8gc" [638c7f91-b728-4fc2-a93f-65d8ca5fd03c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:27:23.025737    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jb8gc" [638c7f91-b728-4fc2-a93f-65d8ca5fd03c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.010076055s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.30s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m11.706316001s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.71s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-p25p5" [1f0e5735-6dc6-4b2f-9e0a-30fa02146bef] Running
E0923 11:28:03.987908    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005785481s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-627100 "pgrep -a kubelet"
I0923 11:28:05.560840    7519 config.go:182] Loaded profile config "flannel-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-627100 replace --force -f testdata/netcat-deployment.yaml
I0923 11:28:05.890998    7519 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cdmb2" [0a358e68-172c-4bbf-bc3a-024440f892dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:28:07.492252    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-cdmb2" [0a358e68-172c-4bbf-bc3a-024440f892dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005809438s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (79.07s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0923 11:28:43.379175    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:28:53.621337    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-627100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m19.067352064s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (79.07s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-627100 "pgrep -a kubelet"
I0923 11:29:05.688236    7519 config.go:182] Loaded profile config "bridge-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-627100 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c2db4" [1c6620cd-3e99-41a0-8990-33d092725bfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:29:09.537122    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-c2db4" [1c6620cd-3e99-41a0-8990-33d092725bfc] Running
E0923 11:29:12.665947    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:12.672481    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:12.683917    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:12.705353    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:12.746823    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:12.828210    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:12.989474    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:13.311243    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:13.952718    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:14.103247    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:15.234282    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004140203s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.40s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.30s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (154.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-986453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0923 11:29:53.641695    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:55.065136    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-986453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m34.077373917s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.08s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.70s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-627100 "pgrep -a kubelet"
I0923 11:30:00.998153    7519 config.go:182] Loaded profile config "kubenet-627100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.70s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-627100 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cgxmg" [d9ecd1e9-64af-4c65-a76b-7151783f9637] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cgxmg" [d9ecd1e9-64af-4c65-a76b-7151783f9637] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.003795483s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.38s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-627100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-627100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
E0923 11:42:11.733388    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:42:11.940106    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:42:15.362294    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:42:39.644651    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:42:52.694997    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:42:59.179848    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:43:05.112732    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:43:07.491654    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:43:33.124353    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:44:06.109289    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:44:12.666592    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:44:14.617662    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:44:26.469750    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:44:31.945058    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.27s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-772325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:30:40.876761    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/custom-flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:30:55.011298    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:01.358206    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/custom-flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:16.987261    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:27.957342    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:27.963805    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:27.975157    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:27.996673    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:28.038299    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:28.119591    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:28.281252    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:28.602940    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:29.245122    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:30.531902    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-772325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (51.268922773s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-772325 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dca5cf3d-f1a0-43a4-9083-7f5c378d9eb4] Pending
helpers_test.go:344: "busybox" [dca5cf3d-f1a0-43a4-9083-7f5c378d9eb4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0923 11:31:33.095290    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [dca5cf3d-f1a0-43a4-9083-7f5c378d9eb4] Running
E0923 11:31:38.217425    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003429085s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-772325 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-772325 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-772325 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.085073871s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-772325 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-772325 --alsologtostderr -v=3
E0923 11:31:42.046282    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:42.320223    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/custom-flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:31:48.459016    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-772325 --alsologtostderr -v=3: (11.048942063s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-772325 -n no-preload-772325
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-772325 -n no-preload-772325: exit status 7 (63.602102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-772325 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (269.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-772325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:31:56.525228    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:08.940318    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:09.751371    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-772325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m29.441379334s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-772325 -n no-preload-772325
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-986453 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7cc7b77a-d1b6-4bd2-9b4a-ec0eaf72f326] Pending
helpers_test.go:344: "busybox" [7cc7b77a-d1b6-4bd2-9b4a-ec0eaf72f326] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0923 11:32:15.362201    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:15.368997    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:15.380325    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:15.402311    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:15.443691    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:15.525105    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:15.686573    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7cc7b77a-d1b6-4bd2-9b4a-ec0eaf72f326] Running
E0923 11:32:16.010745    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:16.653167    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:17.934695    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:20.497234    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003846801s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-986453 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-986453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-986453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.148946085s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-986453 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-986453 --alsologtostderr -v=3
E0923 11:32:25.619476    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-986453 --alsologtostderr -v=3: (11.090341036s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-986453 -n old-k8s-version-986453
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-986453 -n old-k8s-version-986453: exit status 7 (75.221598ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-986453 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (135.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-986453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0923 11:32:35.860898    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:49.901646    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:50.562433    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:56.342862    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.179700    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.186012    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.197319    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.218819    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.260171    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.341640    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.503001    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:32:59.824658    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:00.466432    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:01.748504    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:04.241812    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/custom-flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:04.310317    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:07.491813    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:09.431787    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:19.673251    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:33.124420    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:37.304183    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:33:40.154945    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:00.829411    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.109260    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.115731    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.127150    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.148632    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.190060    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.271571    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.433118    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:06.754954    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:07.397081    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:08.678949    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:11.240373    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:11.823265    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:12.666186    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:16.362208    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:21.117179    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:26.470234    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:26.603691    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:31.944997    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:40.366467    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:34:47.085244    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-986453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m15.479043169s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-986453 -n old-k8s-version-986453
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (135.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fv94w" [fc0373fb-c806-46b1-9060-c363d6ff6fc4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004829284s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fv94w" [fc0373fb-c806-46b1-9060-c363d6ff6fc4] Running
E0923 11:34:59.225511    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:01.360240    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:01.366850    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:01.378298    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:01.400000    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:01.441429    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:01.523117    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003810683s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-986453 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0923 11:35:01.684627    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-986453 image list --format=json
E0923 11:35:02.006212    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-986453 --alsologtostderr -v=1
E0923 11:35:02.649910    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986453 -n old-k8s-version-986453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986453 -n old-k8s-version-986453: exit status 2 (343.494517ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-986453 -n old-k8s-version-986453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-986453 -n old-k8s-version-986453: exit status 2 (322.596397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-986453 --alsologtostderr -v=1
E0923 11:35:03.931412    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986453 -n old-k8s-version-986453
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-986453 -n old-k8s-version-986453
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (73.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-334517 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:35:11.615340    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:20.370607    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/custom-flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:21.857013    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:28.047052    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:42.338488    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:43.039191    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:35:48.083187    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/custom-flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-334517 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m13.716986691s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.72s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-334517 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b9ec182e-a253-484c-8501-c226b2f25a77] Pending
helpers_test.go:344: "busybox" [b9ec182e-a253-484c-8501-c226b2f25a77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0923 11:36:23.299740    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [b9ec182e-a253-484c-8501-c226b2f25a77] Running
E0923 11:36:27.956584    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005114478s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-334517 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-slnjj" [98e40c97-223f-4f67-a716-fb126349322a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004241232s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-slnjj" [98e40c97-223f-4f67-a716-fb126349322a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004499397s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-772325 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-334517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-334517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.01446349s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-334517 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (12.70s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-334517 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-334517 --alsologtostderr -v=3: (12.701168245s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.70s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-772325 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (2.72s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-772325 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-772325 -n no-preload-772325
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-772325 -n no-preload-772325: exit status 2 (319.891584ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-772325 -n no-preload-772325
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-772325 -n no-preload-772325: exit status 2 (316.718314ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-772325 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-772325 -n no-preload-772325
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-772325 -n no-preload-772325
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.72s)

TestStartStop/group/newest-cni/serial/FirstStart (43.75s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-502926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:36:42.045989    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-502926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (43.751592237s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-334517 -n embed-certs-334517
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-334517 -n embed-certs-334517: exit status 7 (90.045446ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-334517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (294.58s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-334517 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:36:49.968675    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:36:55.665545    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:11.939722    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:11.947473    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:11.958788    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:11.980156    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:12.021478    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:12.102964    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:12.264424    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:12.586339    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:13.227682    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:14.509064    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:15.362430    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:17.070961    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:22.192577    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-334517 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m54.233397457s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-334517 -n embed-certs-334517
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (294.58s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-502926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-502926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.088905677s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (11.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-502926 --alsologtostderr -v=3
E0923 11:37:32.434428    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-502926 --alsologtostderr -v=3: (11.029170095s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.03s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-502926 -n newest-cni-502926
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-502926 -n newest-cni-502926: exit status 7 (69.147988ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-502926 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.65s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-502926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:37:43.067044    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/enable-default-cni-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:37:45.221094    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-502926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (17.312266572s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-502926 -n newest-cni-502926
E0923 11:37:52.918734    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.65s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-502926 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-502926 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-502926 -n newest-cni-502926
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-502926 -n newest-cni-502926: exit status 2 (320.953539ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-502926 -n newest-cni-502926
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-502926 -n newest-cni-502926: exit status 2 (327.789417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-502926 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-502926 -n newest-cni-502926
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-502926 -n newest-cni-502926
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-655998 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:37:59.180189    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:38:07.491896    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/functional-716711/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:38:26.880830    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:38:33.124833    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kindnet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:38:33.880888    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:39:06.108508    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:39:12.665773    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/calico-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-655998 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m14.568085675s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.57s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-655998 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7c0fe6af-140d-4434-ab6e-209088cda002] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7c0fe6af-140d-4434-ab6e-209088cda002] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003767274s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-655998 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-655998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-655998 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-655998 --alsologtostderr -v=3
E0923 11:39:26.469728    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:39:31.944380    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/skaffold-483970/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:39:33.810645    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/bridge-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-655998 --alsologtostderr -v=3: (11.101468055s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998: exit status 7 (75.182508ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-655998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-655998 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 11:39:55.802860    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/old-k8s-version-986453/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:40:01.359525    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:40:20.371259    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/custom-flannel-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:40:29.062464    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/kubenet-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:27.957255    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/false-627100/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:30.757651    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:30.764147    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:30.775511    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:30.796887    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:30.838263    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:30.919711    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:31.081305    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:31.403101    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:32.044377    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:33.326981    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:35.888687    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-655998 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (5m0.299859612s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.67s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kt5pj" [c8cf4e41-0183-42c5-9bc9-0047cc33119d] Running
E0923 11:41:41.010561    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:41:42.045849    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/auto-627100/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004379272s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kt5pj" [c8cf4e41-0183-42c5-9bc9-0047cc33119d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004120053s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-334517 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-334517 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-334517 --alsologtostderr -v=1
E0923 11:41:51.252034    7519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/no-preload-772325/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-334517 -n embed-certs-334517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-334517 -n embed-certs-334517: exit status 2 (329.811647ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-334517 -n embed-certs-334517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-334517 -n embed-certs-334517: exit status 2 (346.466313ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-334517 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-334517 -n embed-certs-334517
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-334517 -n embed-certs-334517
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sdcdq" [8c896502-d2f9-468a-89ea-86cf28f91e32] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004048106s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sdcdq" [8c896502-d2f9-468a-89ea-86cf28f91e32] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005078309s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-655998 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-655998 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-655998 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998: exit status 2 (308.19873ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998: exit status 2 (306.30102ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-655998 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-655998 -n default-k8s-diff-port-655998
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-631157 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-631157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-631157
--- SKIP: TestDownloadOnlyKic (0.54s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.13s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-627100 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-627100

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-627100

>>> host: /etc/nsswitch.conf:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /etc/hosts:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /etc/resolv.conf:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-627100

>>> host: crictl pods:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: crictl containers:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> k8s: describe netcat deployment:
error: context "cilium-627100" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-627100" does not exist

>>> k8s: netcat logs:
error: context "cilium-627100" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-627100" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-627100" does not exist

>>> k8s: coredns logs:
error: context "cilium-627100" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-627100" does not exist

>>> k8s: api server logs:
error: context "cilium-627100" does not exist
>>> host: /etc/cni:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-627100

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-627100

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-627100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-627100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-627100

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-627100

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-627100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-627100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-627100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-627100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-627100" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 11:10:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-198240
contexts:
- context:
    cluster: offline-docker-198240
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 11:10:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-198240
  name: offline-docker-198240
current-context: offline-docker-198240
kind: Config
preferences: {}
users:
- name: offline-docker-198240
  user:
    client-certificate: /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/offline-docker-198240/client.crt
    client-key: /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/offline-docker-198240/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-627100

>>> host: docker daemon status:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: docker daemon config:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: docker system info:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: cri-docker daemon status:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: cri-docker daemon config:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: cri-dockerd version:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: containerd daemon status:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: containerd daemon config:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: containerd config dump:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: crio daemon status:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: crio daemon config:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: /etc/crio:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

>>> host: crio config:
* Profile "cilium-627100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-627100"

----------------------- debugLogs end: cilium-627100 [took: 3.888139443s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-627100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-627100
--- SKIP: TestNetworkPlugins/group/cilium (4.13s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-916861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-916861
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)