Test Report: Docker_Linux_docker_arm64 19700

8b226b9d2c09f79dcc3a887682b5a6bd27a95904:2024-09-24:36357
Test fail (1/342)

Order | Failed test | Duration
33 | TestAddons/parallel/Registry | 75.7s

TestAddons/parallel/Registry (75.7s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.658774ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-rjzd4" [87b7747e-ac02-4a01-b537-3ccd964580e8] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004411771s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hgvv7" [cbaedb6d-c887-4a40-9fe1-fd09e1825332] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004270335s
addons_test.go:338: (dbg) Run:  kubectl --context addons-706965 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-706965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-706965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.121182699s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-706965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 ip
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable registry --alsologtostderr -v=1
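The failing step above is a plain HTTP reachability probe: `wget --spider -S` issues a request and succeeds only on an HTTP success status. An equivalent sketch in Python, probing a local stand-in server (the real target, `registry.kube-system.svc.cluster.local`, only resolves inside the cluster, so this is illustrative, not the test's actual code):

```python
import http.server
import threading
import urllib.request

# Local stand-in for the in-cluster registry Service endpoint; the real test
# probes http://registry.kube-system.svc.cluster.local from a busybox pod.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def spider(url, timeout=5):
    """Rough `wget --spider` equivalent: HEAD the URL, return the status code."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

status = spider(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

In the failed run, the analogous request never completed: the pod's wget timed out after 1m0s, so kubectl exited non-zero with "timed out waiting for the condition" rather than the expected "HTTP/1.1 200" response.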
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-706965
helpers_test.go:235: (dbg) docker inspect addons-706965:

-- stdout --
	[
	    {
	        "Id": "4c08abeea2e81f4ac38b2ef184b15b770a7329cb184799081b70949e74528fbe",
	        "Created": "2024-09-24T18:20:32.863471258Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-24T18:20:33.025844994Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/4c08abeea2e81f4ac38b2ef184b15b770a7329cb184799081b70949e74528fbe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4c08abeea2e81f4ac38b2ef184b15b770a7329cb184799081b70949e74528fbe/hostname",
	        "HostsPath": "/var/lib/docker/containers/4c08abeea2e81f4ac38b2ef184b15b770a7329cb184799081b70949e74528fbe/hosts",
	        "LogPath": "/var/lib/docker/containers/4c08abeea2e81f4ac38b2ef184b15b770a7329cb184799081b70949e74528fbe/4c08abeea2e81f4ac38b2ef184b15b770a7329cb184799081b70949e74528fbe-json.log",
	        "Name": "/addons-706965",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-706965:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-706965",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/61abdc51c7d37138ac0a0f84cb524af11255e5a5747955ddbdb71dbc451c4d53-init/diff:/var/lib/docker/overlay2/7cfe2bf694b7cf0e1b15852d55af611021b213fc601a3f2ee5d4cb4fdb7ca964/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61abdc51c7d37138ac0a0f84cb524af11255e5a5747955ddbdb71dbc451c4d53/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61abdc51c7d37138ac0a0f84cb524af11255e5a5747955ddbdb71dbc451c4d53/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61abdc51c7d37138ac0a0f84cb524af11255e5a5747955ddbdb71dbc451c4d53/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-706965",
	                "Source": "/var/lib/docker/volumes/addons-706965/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-706965",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-706965",
	                "name.minikube.sigs.k8s.io": "addons-706965",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0e3ffdf1b60cda923b10846f048c347fb3a16deba241b13f5827aae7781a2c2",
	            "SandboxKey": "/var/run/docker/netns/f0e3ffdf1b60",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-706965": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d00ac3d797bfeb3d9330e59e581269664d1e57fdb82160b05082a3171c36d975",
	                    "EndpointID": "ae608a9d279281e8b697f6ecee4ce75e50ce3fe973e3a68f0f6dc6b578422da8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-706965",
	                        "4c08abeea2e8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
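The `NetworkSettings.Ports` map in the inspect output above records how each exposed container port is bound on the host. That mapping can be read programmatically; a minimal sketch using Python's `json` module on an abridged copy of the output (static sample data, not a live `docker inspect` call):

```python
import json

# Abridged from the `docker inspect addons-706965` output above.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
}}}]
""")

def host_ports(inspect_doc):
    """Map container port -> (host IP, host port) for the first container."""
    ports = inspect_doc[0]["NetworkSettings"]["Ports"]
    return {cport: (bindings[0]["HostIp"], bindings[0]["HostPort"])
            for cport, bindings in ports.items() if bindings}

print(host_ports(inspect_output))
```

The same lookup can be done directly with `docker inspect --format` Go templates; parsing the JSON is simply easier to show self-contained here.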
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-706965 -n addons-706965
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 logs -n 25: (1.271744298s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-383349   | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p download-only-383349              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-383349              | download-only-383349   | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | -o=json --download-only              | download-only-168686   | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | -p download-only-168686              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-168686              | download-only-168686   | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-383349              | download-only-383349   | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-168686              | download-only-168686   | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | --download-only -p                   | download-docker-961334 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | download-docker-961334               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-961334            | download-docker-961334 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | --download-only -p                   | binary-mirror-168589   | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | binary-mirror-168589                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37169               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-168589              | binary-mirror-168589   | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| addons  | enable dashboard -p                  | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-706965                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-706965                        |                        |         |         |                     |                     |
	| start   | -p addons-706965 --wait=true         | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:23 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-706965 addons disable         | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:24 UTC | 24 Sep 24 18:24 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:32 UTC | 24 Sep 24 18:32 UTC |
	|         | -p addons-706965                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-706965 addons disable         | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:32 UTC | 24 Sep 24 18:32 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-706965 addons                 | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-706965 addons                 | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-706965 ip                     | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	| addons  | addons-706965 addons                 | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-706965 addons disable         | addons-706965          | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:20:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:20:09.148963    8267 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:20:09.149305    8267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:09.149320    8267 out.go:358] Setting ErrFile to fd 2...
	I0924 18:20:09.149325    8267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:09.149632    8267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 18:20:09.150246    8267 out.go:352] Setting JSON to false
	I0924 18:20:09.151010    8267 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":155,"bootTime":1727201855,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0924 18:20:09.151088    8267 start.go:139] virtualization:  
	I0924 18:20:09.153121    8267 out.go:177] * [addons-706965] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 18:20:09.154713    8267 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:20:09.154784    8267 notify.go:220] Checking for updates...
	I0924 18:20:09.157095    8267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:20:09.158823    8267 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	I0924 18:20:09.160169    8267 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	I0924 18:20:09.161580    8267 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 18:20:09.163527    8267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:20:09.165449    8267 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:20:09.185123    8267 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:20:09.185285    8267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:20:09.240347    8267 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 18:20:09.230775477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:20:09.240464    8267 docker.go:318] overlay module found
	I0924 18:20:09.241767    8267 out.go:177] * Using the docker driver based on user configuration
	I0924 18:20:09.242957    8267 start.go:297] selected driver: docker
	I0924 18:20:09.242972    8267 start.go:901] validating driver "docker" against <nil>
	I0924 18:20:09.242985    8267 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:20:09.243621    8267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:20:09.294074    8267 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 18:20:09.28520256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:20:09.294343    8267 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:20:09.294609    8267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:20:09.296166    8267 out.go:177] * Using Docker driver with root privileges
	I0924 18:20:09.297345    8267 cni.go:84] Creating CNI manager for ""
	I0924 18:20:09.297432    8267 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 18:20:09.297446    8267 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:20:09.297574    8267 start.go:340] cluster config:
	{Name:addons-706965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-706965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:09.299101    8267 out.go:177] * Starting "addons-706965" primary control-plane node in "addons-706965" cluster
	I0924 18:20:09.300435    8267 cache.go:121] Beginning downloading kic base image for docker with docker
	I0924 18:20:09.301567    8267 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0924 18:20:09.302917    8267 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 18:20:09.302975    8267 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 18:20:09.302998    8267 cache.go:56] Caching tarball of preloaded images
	I0924 18:20:09.303008    8267 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 18:20:09.303084    8267 preload.go:172] Found /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 18:20:09.303094    8267 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 18:20:09.303446    8267 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/config.json ...
	I0924 18:20:09.303502    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/config.json: {Name:mk5ed2c316a8540cc55defc900f241a63645d758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:09.318435    8267 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:20:09.318554    8267 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 18:20:09.318578    8267 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0924 18:20:09.318583    8267 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0924 18:20:09.318591    8267 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0924 18:20:09.318600    8267 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0924 18:20:26.043546    8267 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0924 18:20:26.043604    8267 cache.go:194] Successfully downloaded all kic artifacts
	I0924 18:20:26.043633    8267 start.go:360] acquireMachinesLock for addons-706965: {Name:mk2f7b6ba2907993fac051b7b97f69b760a34555 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:26.043755    8267 start.go:364] duration metric: took 98.722µs to acquireMachinesLock for "addons-706965"
	I0924 18:20:26.043791    8267 start.go:93] Provisioning new machine with config: &{Name:addons-706965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-706965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 18:20:26.043867    8267 start.go:125] createHost starting for "" (driver="docker")
	I0924 18:20:26.045375    8267 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0924 18:20:26.045654    8267 start.go:159] libmachine.API.Create for "addons-706965" (driver="docker")
	I0924 18:20:26.045697    8267 client.go:168] LocalClient.Create starting
	I0924 18:20:26.045806    8267 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca.pem
	I0924 18:20:26.608406    8267 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/cert.pem
	I0924 18:20:26.730038    8267 cli_runner.go:164] Run: docker network inspect addons-706965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0924 18:20:26.744367    8267 cli_runner.go:211] docker network inspect addons-706965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0924 18:20:26.744465    8267 network_create.go:284] running [docker network inspect addons-706965] to gather additional debugging logs...
	I0924 18:20:26.744488    8267 cli_runner.go:164] Run: docker network inspect addons-706965
	W0924 18:20:26.760403    8267 cli_runner.go:211] docker network inspect addons-706965 returned with exit code 1
	I0924 18:20:26.760432    8267 network_create.go:287] error running [docker network inspect addons-706965]: docker network inspect addons-706965: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-706965 not found
	I0924 18:20:26.760445    8267 network_create.go:289] output of [docker network inspect addons-706965]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-706965 not found
	
	** /stderr **
	I0924 18:20:26.760540    8267 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 18:20:26.776516    8267 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b81e60}
	I0924 18:20:26.776555    8267 network_create.go:124] attempt to create docker network addons-706965 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0924 18:20:26.776611    8267 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-706965 addons-706965
	I0924 18:20:26.842058    8267 network_create.go:108] docker network addons-706965 192.168.49.0/24 created
	I0924 18:20:26.842089    8267 kic.go:121] calculated static IP "192.168.49.2" for the "addons-706965" container
	I0924 18:20:26.842171    8267 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0924 18:20:26.855762    8267 cli_runner.go:164] Run: docker volume create addons-706965 --label name.minikube.sigs.k8s.io=addons-706965 --label created_by.minikube.sigs.k8s.io=true
	I0924 18:20:26.871578    8267 oci.go:103] Successfully created a docker volume addons-706965
	I0924 18:20:26.871880    8267 cli_runner.go:164] Run: docker run --rm --name addons-706965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-706965 --entrypoint /usr/bin/test -v addons-706965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0924 18:20:29.054107    8267 cli_runner.go:217] Completed: docker run --rm --name addons-706965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-706965 --entrypoint /usr/bin/test -v addons-706965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.182177193s)
	I0924 18:20:29.054135    8267 oci.go:107] Successfully prepared a docker volume addons-706965
	I0924 18:20:29.054160    8267 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 18:20:29.054179    8267 kic.go:194] Starting extracting preloaded images to volume ...
	I0924 18:20:29.054248    8267 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-706965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0924 18:20:32.794544    8267 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-706965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.740239304s)
	I0924 18:20:32.794574    8267 kic.go:203] duration metric: took 3.740392647s to extract preloaded images to volume ...
	W0924 18:20:32.794712    8267 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0924 18:20:32.794841    8267 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0924 18:20:32.849668    8267 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-706965 --name addons-706965 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-706965 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-706965 --network addons-706965 --ip 192.168.49.2 --volume addons-706965:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0924 18:20:33.191115    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Running}}
	I0924 18:20:33.216846    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:20:33.240006    8267 cli_runner.go:164] Run: docker exec addons-706965 stat /var/lib/dpkg/alternatives/iptables
	I0924 18:20:33.305713    8267 oci.go:144] the created container "addons-706965" has a running status.
	I0924 18:20:33.305741    8267 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa...
	I0924 18:20:33.660412    8267 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0924 18:20:33.696770    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:20:33.722862    8267 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0924 18:20:33.722881    8267 kic_runner.go:114] Args: [docker exec --privileged addons-706965 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0924 18:20:33.794971    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:20:33.819497    8267 machine.go:93] provisionDockerMachine start ...
	I0924 18:20:33.819583    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:33.849305    8267 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:33.849568    8267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0924 18:20:33.849579    8267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 18:20:34.037517    8267 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-706965
	
	I0924 18:20:34.037583    8267 ubuntu.go:169] provisioning hostname "addons-706965"
	I0924 18:20:34.037676    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:34.058492    8267 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:34.058730    8267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0924 18:20:34.058742    8267 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-706965 && echo "addons-706965" | sudo tee /etc/hostname
	I0924 18:20:34.203967    8267 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-706965
	
	I0924 18:20:34.204126    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:34.223251    8267 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:34.223499    8267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0924 18:20:34.223521    8267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-706965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-706965/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-706965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:20:34.361013    8267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:20:34.361038    8267 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19700-2203/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-2203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-2203/.minikube}
	I0924 18:20:34.361057    8267 ubuntu.go:177] setting up certificates
	I0924 18:20:34.361068    8267 provision.go:84] configureAuth start
	I0924 18:20:34.361132    8267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-706965
	I0924 18:20:34.378080    8267 provision.go:143] copyHostCerts
	I0924 18:20:34.378161    8267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-2203/.minikube/ca.pem (1082 bytes)
	I0924 18:20:34.378291    8267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-2203/.minikube/cert.pem (1123 bytes)
	I0924 18:20:34.378373    8267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-2203/.minikube/key.pem (1679 bytes)
	I0924 18:20:34.378433    8267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-2203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca-key.pem org=jenkins.addons-706965 san=[127.0.0.1 192.168.49.2 addons-706965 localhost minikube]
	I0924 18:20:34.572895    8267 provision.go:177] copyRemoteCerts
	I0924 18:20:34.572958    8267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:20:34.573009    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:34.589164    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:20:34.682265    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 18:20:34.706800    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:20:34.730507    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:20:34.754407    8267 provision.go:87] duration metric: took 393.32495ms to configureAuth
	I0924 18:20:34.754434    8267 ubuntu.go:193] setting minikube options for container-runtime
	I0924 18:20:34.754623    8267 config.go:182] Loaded profile config "addons-706965": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:20:34.754685    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:34.770650    8267 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:34.770893    8267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0924 18:20:34.770911    8267 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0924 18:20:34.901477    8267 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0924 18:20:34.901497    8267 ubuntu.go:71] root file system type: overlay
	I0924 18:20:34.901637    8267 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0924 18:20:34.901706    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:34.919001    8267 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:34.919252    8267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0924 18:20:34.919333    8267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0924 18:20:35.059124    8267 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0924 18:20:35.059220    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:35.079377    8267 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:35.079623    8267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0924 18:20:35.079646    8267 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0924 18:20:35.834112    8267 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:18.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-24 18:20:35.054105549 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
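The SSH command captured above follows a compare-or-replace pattern: the new unit file is only moved into place (followed by a daemon-reload and restart) when it differs from the current one. A minimal sketch of that pattern, with placeholder file names rather than the real systemd unit paths:

```shell
#!/usr/bin/env sh
# Compare-or-replace: install the candidate file only when it differs from
# the current one. The systemctl daemon-reload/restart step that the log's
# real command runs is left as a comment here.
update_unit() {
  current="$1"   # stand-in for /lib/systemd/system/docker.service
  candidate="$2" # stand-in for /lib/systemd/system/docker.service.new
  if diff -u "$current" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"          # identical: discard the candidate
    echo "unchanged"
  else
    mv "$candidate" "$current"  # differs: install the new file
    # systemctl daemon-reload && systemctl restart <service>
    echo "replaced"
  fi
}
```

Because `diff` exits non-zero on any difference, the log's one-liner (`diff ... || { mv ...; }`) and this function make the same decision.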
	I0924 18:20:35.834147    8267 machine.go:96] duration metric: took 2.014630532s to provisionDockerMachine
	I0924 18:20:35.834159    8267 client.go:171] duration metric: took 9.788452472s to LocalClient.Create
	I0924 18:20:35.834172    8267 start.go:167] duration metric: took 9.788519901s to libmachine.API.Create "addons-706965"
	I0924 18:20:35.834179    8267 start.go:293] postStartSetup for "addons-706965" (driver="docker")
	I0924 18:20:35.834195    8267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:20:35.834265    8267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:20:35.834306    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:35.851273    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:20:35.946078    8267 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:20:35.949162    8267 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0924 18:20:35.949198    8267 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0924 18:20:35.949210    8267 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0924 18:20:35.949217    8267 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0924 18:20:35.949231    8267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-2203/.minikube/addons for local assets ...
	I0924 18:20:35.949306    8267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-2203/.minikube/files for local assets ...
	I0924 18:20:35.949333    8267 start.go:296] duration metric: took 115.142823ms for postStartSetup
	I0924 18:20:35.949641    8267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-706965
	I0924 18:20:35.965671    8267 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/config.json ...
	I0924 18:20:35.965965    8267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:20:35.966021    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:35.981992    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:20:36.078162    8267 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0924 18:20:36.083064    8267 start.go:128] duration metric: took 10.039183443s to createHost
	I0924 18:20:36.083086    8267 start.go:83] releasing machines lock for "addons-706965", held for 10.039316946s
	I0924 18:20:36.083154    8267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-706965
	I0924 18:20:36.101068    8267 ssh_runner.go:195] Run: cat /version.json
	I0924 18:20:36.101126    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:36.101391    8267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:20:36.101448    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:20:36.126899    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:20:36.127731    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:20:36.408834    8267 ssh_runner.go:195] Run: systemctl --version
	I0924 18:20:36.412963    8267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 18:20:36.417343    8267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0924 18:20:36.443668    8267 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0924 18:20:36.443745    8267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:20:36.470661    8267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0924 18:20:36.470689    8267 start.go:495] detecting cgroup driver to use...
	I0924 18:20:36.470722    8267 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 18:20:36.470822    8267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:20:36.486827    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0924 18:20:36.496690    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 18:20:36.506535    8267 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 18:20:36.506646    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 18:20:36.516662    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 18:20:36.526420    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 18:20:36.536144    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 18:20:36.545861    8267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:20:36.555476    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 18:20:36.565294    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 18:20:36.575215    8267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
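The sequence of `ssh_runner` calls above patches `/etc/containerd/config.toml` with in-place `sed` edits rather than templating a whole new file. Two of those edits, wrapped in a function against a stand-in config path:

```shell
#!/usr/bin/env sh
# Sketch of two of the sed edits the log runs against
# /etc/containerd/config.toml: force SystemdCgroup off (the host uses the
# cgroupfs driver) and migrate the legacy v1 runtime name to runc v2.
# $cfg is a placeholder path, not the real file.
patch_containerd_config() {
  cfg="$1"
  sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
  sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
}
```

The capture group `( *)` preserves the original indentation, so the edit is safe regardless of how deeply the key is nested in the TOML.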
	I0924 18:20:36.585272    8267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:20:36.593970    8267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:20:36.594038    8267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:20:36.611579    8267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
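The exit-255 `sysctl` above is expected noise: the bridge-netfilter sysctl only exists once the `br_netfilter` module is loaded, so the code probes first and falls back to `modprobe`. That probe-then-load fallback can be sketched as (root privileges assumed for the real `modprobe`):

```shell
#!/usr/bin/env sh
# Probe-then-load: if the bridge-netfilter sysctl is missing, load the
# kernel module that provides it, mirroring the log's fallback. Requires
# root on a real host; the function names are illustrative.
ensure_bridge_netfilter() {
  if sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
    echo "present"
  else
    modprobe br_netfilter || return 1
    echo "loaded"
  fi
}
```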
	I0924 18:20:36.620530    8267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:36.712302    8267 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0924 18:20:36.811153    8267 start.go:495] detecting cgroup driver to use...
	I0924 18:20:36.811199    8267 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 18:20:36.811271    8267 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0924 18:20:36.824308    8267 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0924 18:20:36.824418    8267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 18:20:36.843680    8267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:20:36.863431    8267 ssh_runner.go:195] Run: which cri-dockerd
	I0924 18:20:36.866986    8267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0924 18:20:36.876006    8267 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0924 18:20:36.893595    8267 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0924 18:20:36.995331    8267 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0924 18:20:37.102386    8267 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0924 18:20:37.102584    8267 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0924 18:20:37.121034    8267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:37.214211    8267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 18:20:37.471267    8267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0924 18:20:37.484284    8267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 18:20:37.496305    8267 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0924 18:20:37.588529    8267 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0924 18:20:37.671541    8267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:37.749263    8267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0924 18:20:37.762809    8267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 18:20:37.774054    8267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:37.861002    8267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0924 18:20:37.924801    8267 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0924 18:20:37.924916    8267 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0924 18:20:37.929452    8267 start.go:563] Will wait 60s for crictl version
	I0924 18:20:37.929520    8267 ssh_runner.go:195] Run: which crictl
	I0924 18:20:37.932980    8267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:20:37.969439    8267 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0924 18:20:37.969507    8267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 18:20:37.989725    8267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 18:20:38.017013    8267 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0924 18:20:38.017134    8267 cli_runner.go:164] Run: docker network inspect addons-706965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 18:20:38.032613    8267 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0924 18:20:38.037262    8267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
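The `/etc/hosts` command above is an idempotent upsert: strip any existing line ending in the hostname, append the fresh `IP<tab>name` mapping, and copy the result back over the original. A generic sketch against a stand-in hosts file:

```shell
#!/usr/bin/env sh
# Idempotent hosts-entry upsert, mirroring the log's grep -v + echo + cp
# pattern. $hosts is a placeholder for /etc/hosts (the real command needs
# sudo for the final copy).
set_host_entry() {
  hosts="$1"; ip="$2"; name="$3"
  tab=$(printf '\t')
  tmp=$(mktemp)
  { grep -v "${tab}${name}\$" "$hosts" || true
    printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
  cp "$tmp" "$hosts"
  rm -f "$tmp"
}
```

Writing to a temp file and copying back (rather than redirecting onto the file being read) avoids truncating the hosts file mid-read, which is why the log's command uses `/tmp/h.$$`.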
	I0924 18:20:38.048454    8267 kubeadm.go:883] updating cluster {Name:addons-706965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-706965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:20:38.048597    8267 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 18:20:38.048674    8267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 18:20:38.067916    8267 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 18:20:38.067938    8267 docker.go:615] Images already preloaded, skipping extraction
	I0924 18:20:38.068004    8267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 18:20:38.086988    8267 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 18:20:38.087013    8267 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:20:38.087024    8267 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0924 18:20:38.087138    8267 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-706965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-706965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:20:38.087209    8267 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0924 18:20:38.131709    8267 cni.go:84] Creating CNI manager for ""
	I0924 18:20:38.131782    8267 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 18:20:38.131813    8267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:20:38.131861    8267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-706965 NodeName:addons-706965 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:20:38.132048    8267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-706965"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 18:20:38.132159    8267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:20:38.141017    8267 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:20:38.141107    8267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 18:20:38.149806    8267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 18:20:38.167106    8267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:20:38.184213    8267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0924 18:20:38.201675    8267 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0924 18:20:38.204870    8267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:20:38.215232    8267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:38.305401    8267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:20:38.320604    8267 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965 for IP: 192.168.49.2
	I0924 18:20:38.320622    8267 certs.go:194] generating shared ca certs ...
	I0924 18:20:38.320638    8267 certs.go:226] acquiring lock for ca certs: {Name:mk7f289dea8519c99434bde94b767796e170ff1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:38.320763    8267 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-2203/.minikube/ca.key
	I0924 18:20:38.485642    8267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-2203/.minikube/ca.crt ...
	I0924 18:20:38.485676    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/ca.crt: {Name:mk047053a6c7052102e1f8a05b1a416b993ba3c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:38.485903    8267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-2203/.minikube/ca.key ...
	I0924 18:20:38.485920    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/ca.key: {Name:mk3dc56394baf6b22697e1eeae352c7ceb0e461a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:38.486023    8267 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-2203/.minikube/proxy-client-ca.key
	I0924 18:20:38.732678    8267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-2203/.minikube/proxy-client-ca.crt ...
	I0924 18:20:38.732708    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/proxy-client-ca.crt: {Name:mkba173f9f526258ef4298e4ed6cac114df354f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:38.732883    8267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-2203/.minikube/proxy-client-ca.key ...
	I0924 18:20:38.732896    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/proxy-client-ca.key: {Name:mk3a7643db56842930dcdc8872ecf2dc97610aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:38.732977    8267 certs.go:256] generating profile certs ...
	I0924 18:20:38.733036    8267 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.key
	I0924 18:20:38.733053    8267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt with IP's: []
	I0924 18:20:39.325910    8267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt ...
	I0924 18:20:39.325945    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: {Name:mk4cebe930f9e3cfbc3e87b112ad682a8f23b948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:39.326138    8267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.key ...
	I0924 18:20:39.326151    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.key: {Name:mk626e7747d205f50e79bea1aeb6c9b2425f5d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:39.326236    8267 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.key.35a85712
	I0924 18:20:39.326252    8267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.crt.35a85712 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0924 18:20:40.564169    8267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.crt.35a85712 ...
	I0924 18:20:40.564202    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.crt.35a85712: {Name:mkfdc8ddb34146da388e237122a6804a34afae66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:40.564425    8267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.key.35a85712 ...
	I0924 18:20:40.564442    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.key.35a85712: {Name:mk4950b54c9a1c6255b7daa76ed1f04ec4a87df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:40.564534    8267 certs.go:381] copying /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.crt.35a85712 -> /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.crt
	I0924 18:20:40.564634    8267 certs.go:385] copying /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.key.35a85712 -> /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.key
	I0924 18:20:40.564693    8267 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.key
	I0924 18:20:40.564714    8267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.crt with IP's: []
	I0924 18:20:41.333391    8267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.crt ...
	I0924 18:20:41.333422    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.crt: {Name:mk778bcb1642b3a5ad4fa3f4e009e24419d3331f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:41.333618    8267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.key ...
	I0924 18:20:41.333632    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.key: {Name:mk642dbd2d54b38d314c4432ac2036c53363c469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:41.333837    8267 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:20:41.333882    8267 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/ca.pem (1082 bytes)
	I0924 18:20:41.333912    8267 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:20:41.333940    8267 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-2203/.minikube/certs/key.pem (1679 bytes)
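The cert steps above follow the usual CA-then-leaf flow: generate a self-signed CA ("minikubeCA"), then issue profile certs signed by it. The same flow, redone with the `openssl` CLI; subject names and file names here are illustrative, not minikube's real paths:

```shell
#!/usr/bin/env sh
# CA-then-leaf certificate flow with openssl: self-signed CA, then a client
# cert signed by that CA, then chain verification. All names illustrative.
d=$(mktemp -d); cd "$d"
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -key ca.key -subj "/CN=minikubeCA" -days 1 -out ca.crt
openssl genrsa -out client.key 2048 2>/dev/null
openssl req -new -key client.key -subj "/CN=minikube-user" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -out client.crt 2>/dev/null
openssl verify -CAfile ca.crt client.crt   # prints "client.crt: OK"
```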
	I0924 18:20:41.334539    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:20:41.362155    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0924 18:20:41.388651    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:20:41.416562    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:20:41.441452    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 18:20:41.466226    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:20:41.490491    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:20:41.514998    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 18:20:41.539262    8267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-2203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:20:41.564283    8267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:20:41.582477    8267 ssh_runner.go:195] Run: openssl version
	I0924 18:20:41.588097    8267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:20:41.597813    8267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:41.601349    8267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:41.601423    8267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:41.608544    8267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:20:41.618030    8267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:20:41.621273    8267 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:20:41.621326    8267 kubeadm.go:392] StartCluster: {Name:addons-706965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-706965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:41.621451    8267 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 18:20:41.638244    8267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:20:41.647405    8267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:20:41.656133    8267 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0924 18:20:41.656194    8267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:20:41.665300    8267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:20:41.665323    8267 kubeadm.go:157] found existing configuration files:
	
	I0924 18:20:41.665384    8267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:20:41.674260    8267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:20:41.674354    8267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:20:41.683199    8267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:20:41.692017    8267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:20:41.692102    8267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:20:41.700876    8267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:20:41.710079    8267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:20:41.710173    8267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:20:41.718438    8267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:20:41.727553    8267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:20:41.727623    8267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 18:20:41.735997    8267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0924 18:20:41.779901    8267 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:20:41.780230    8267 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:20:41.802516    8267 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0924 18:20:41.802598    8267 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0924 18:20:41.802643    8267 kubeadm.go:310] OS: Linux
	I0924 18:20:41.802692    8267 kubeadm.go:310] CGROUPS_CPU: enabled
	I0924 18:20:41.802749    8267 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0924 18:20:41.802804    8267 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0924 18:20:41.802862    8267 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0924 18:20:41.802914    8267 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0924 18:20:41.802975    8267 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0924 18:20:41.803051    8267 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0924 18:20:41.803115    8267 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0924 18:20:41.803175    8267 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0924 18:20:41.858607    8267 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:20:41.858721    8267 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:20:41.858816    8267 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:20:41.870510    8267 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:20:41.872955    8267 out.go:235]   - Generating certificates and keys ...
	I0924 18:20:41.873056    8267 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:20:41.873131    8267 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:20:42.225248    8267 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:20:43.064091    8267 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:20:43.649765    8267 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:20:44.260062    8267 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:20:44.558159    8267 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:20:44.558429    8267 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-706965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0924 18:20:45.847002    8267 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:20:45.847134    8267 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-706965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0924 18:20:46.420610    8267 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:20:47.143898    8267 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:20:48.188379    8267 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:20:48.188714    8267 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:20:48.486675    8267 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:20:49.013482    8267 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:20:49.884695    8267 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:20:50.071043    8267 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:20:50.594100    8267 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:20:50.594974    8267 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:20:50.598151    8267 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:20:50.599987    8267 out.go:235]   - Booting up control plane ...
	I0924 18:20:50.600095    8267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:20:50.600176    8267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:20:50.601189    8267 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:20:50.611964    8267 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:20:50.619350    8267 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:20:50.619682    8267 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:20:50.743192    8267 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:20:50.743374    8267 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:20:51.744950    8267 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00161168s
	I0924 18:20:51.745040    8267 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:20:58.248689    8267 kubeadm.go:310] [api-check] The API server is healthy after 6.502019709s
	I0924 18:20:58.275023    8267 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:20:58.307190    8267 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:20:58.329930    8267 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:20:58.330123    8267 kubeadm.go:310] [mark-control-plane] Marking the node addons-706965 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:20:58.341854    8267 kubeadm.go:310] [bootstrap-token] Using token: 1nurv4.815dnmhn5tv4zfj4
	I0924 18:20:58.343111    8267 out.go:235]   - Configuring RBAC rules ...
	I0924 18:20:58.343225    8267 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:20:58.348542    8267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:20:58.356010    8267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:20:58.359669    8267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:20:58.364183    8267 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:20:58.367888    8267 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:20:58.652946    8267 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:20:59.084878    8267 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:20:59.653765    8267 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:20:59.654919    8267 kubeadm.go:310] 
	I0924 18:20:59.655013    8267 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:20:59.655027    8267 kubeadm.go:310] 
	I0924 18:20:59.655110    8267 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:20:59.655116    8267 kubeadm.go:310] 
	I0924 18:20:59.655145    8267 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:20:59.655208    8267 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:20:59.655258    8267 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:20:59.655263    8267 kubeadm.go:310] 
	I0924 18:20:59.655316    8267 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:20:59.655320    8267 kubeadm.go:310] 
	I0924 18:20:59.655367    8267 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:20:59.655371    8267 kubeadm.go:310] 
	I0924 18:20:59.655422    8267 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:20:59.655496    8267 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:20:59.655563    8267 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:20:59.655567    8267 kubeadm.go:310] 
	I0924 18:20:59.655649    8267 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:20:59.655725    8267 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:20:59.655729    8267 kubeadm.go:310] 
	I0924 18:20:59.655811    8267 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1nurv4.815dnmhn5tv4zfj4 \
	I0924 18:20:59.655913    8267 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0b60f56f80845488c64d300977f2bf68c41ecd8b64267fb4ecb5eabfad1a9ccb \
	I0924 18:20:59.655933    8267 kubeadm.go:310] 	--control-plane 
	I0924 18:20:59.655937    8267 kubeadm.go:310] 
	I0924 18:20:59.656020    8267 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:20:59.656025    8267 kubeadm.go:310] 
	I0924 18:20:59.656105    8267 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1nurv4.815dnmhn5tv4zfj4 \
	I0924 18:20:59.656205    8267 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0b60f56f80845488c64d300977f2bf68c41ecd8b64267fb4ecb5eabfad1a9ccb 
	I0924 18:20:59.659045    8267 kubeadm.go:310] W0924 18:20:41.776259    1823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:59.659342    8267 kubeadm.go:310] W0924 18:20:41.777324    1823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:59.659554    8267 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0924 18:20:59.659663    8267 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 18:20:59.659683    8267 cni.go:84] Creating CNI manager for ""
	I0924 18:20:59.659700    8267 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 18:20:59.661221    8267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 18:20:59.662781    8267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 18:20:59.671260    8267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 18:20:59.691610    8267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:20:59.691734    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:59.691813    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-706965 minikube.k8s.io/updated_at=2024_09_24T18_20_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=addons-706965 minikube.k8s.io/primary=true
	I0924 18:20:59.826563    8267 ops.go:34] apiserver oom_adj: -16
	I0924 18:20:59.826685    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.326805    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.827024    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:01.326834    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:01.826900    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:02.327642    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:02.826982    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:03.327087    8267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:03.432004    8267 kubeadm.go:1113] duration metric: took 3.740312159s to wait for elevateKubeSystemPrivileges
	I0924 18:21:03.432037    8267 kubeadm.go:394] duration metric: took 21.810715057s to StartCluster
	I0924 18:21:03.432059    8267 settings.go:142] acquiring lock: {Name:mkf663006618d2085c6b5855124c28d24d611c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:03.432197    8267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-2203/kubeconfig
	I0924 18:21:03.432575    8267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/kubeconfig: {Name:mk0a90896f4ed59dd706a3da6d92ca196e85870e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:03.432781    8267 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 18:21:03.432893    8267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:21:03.433156    8267 config.go:182] Loaded profile config "addons-706965": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:21:03.433198    8267 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0924 18:21:03.433307    8267 addons.go:69] Setting yakd=true in profile "addons-706965"
	I0924 18:21:03.433326    8267 addons.go:234] Setting addon yakd=true in "addons-706965"
	I0924 18:21:03.433350    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.433842    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.434356    8267 addons.go:69] Setting metrics-server=true in profile "addons-706965"
	I0924 18:21:03.434377    8267 addons.go:234] Setting addon metrics-server=true in "addons-706965"
	I0924 18:21:03.434403    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.434861    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.438018    8267 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-706965"
	I0924 18:21:03.438099    8267 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-706965"
	I0924 18:21:03.438199    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.438484    8267 addons.go:69] Setting cloud-spanner=true in profile "addons-706965"
	I0924 18:21:03.438568    8267 addons.go:234] Setting addon cloud-spanner=true in "addons-706965"
	I0924 18:21:03.438661    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.439368    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.441941    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.442089    8267 addons.go:69] Setting registry=true in profile "addons-706965"
	I0924 18:21:03.443383    8267 addons.go:234] Setting addon registry=true in "addons-706965"
	I0924 18:21:03.443419    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.443949    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.438874    8267 addons.go:69] Setting default-storageclass=true in profile "addons-706965"
	I0924 18:21:03.446466    8267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-706965"
	I0924 18:21:03.446788    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.438878    8267 addons.go:69] Setting gcp-auth=true in profile "addons-706965"
	I0924 18:21:03.468360    8267 mustload.go:65] Loading cluster: addons-706965
	I0924 18:21:03.468599    8267 config.go:182] Loaded profile config "addons-706965": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:21:03.468954    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.438889    8267 addons.go:69] Setting ingress=true in profile "addons-706965"
	I0924 18:21:03.469384    8267 addons.go:234] Setting addon ingress=true in "addons-706965"
	I0924 18:21:03.469467    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.470094    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.438896    8267 addons.go:69] Setting ingress-dns=true in profile "addons-706965"
	I0924 18:21:03.497596    8267 addons.go:234] Setting addon ingress-dns=true in "addons-706965"
	I0924 18:21:03.497761    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.498254    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.438903    8267 addons.go:69] Setting inspektor-gadget=true in profile "addons-706965"
	I0924 18:21:03.499197    8267 addons.go:234] Setting addon inspektor-gadget=true in "addons-706965"
	I0924 18:21:03.499246    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.499850    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.508849    8267 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 18:21:03.510359    8267 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 18:21:03.510388    8267 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 18:21:03.510461    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.438930    8267 out.go:177] * Verifying Kubernetes components...
	I0924 18:21:03.515374    8267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:21:03.442099    8267 addons.go:69] Setting storage-provisioner=true in profile "addons-706965"
	I0924 18:21:03.515736    8267 addons.go:234] Setting addon storage-provisioner=true in "addons-706965"
	I0924 18:21:03.515769    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.516222    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.530560    8267 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 18:21:03.531867    8267 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 18:21:03.531899    8267 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 18:21:03.531967    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.442104    8267 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-706965"
	I0924 18:21:03.539738    8267 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-706965"
	I0924 18:21:03.540164    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.442108    8267 addons.go:69] Setting volcano=true in profile "addons-706965"
	I0924 18:21:03.549295    8267 addons.go:234] Setting addon volcano=true in "addons-706965"
	I0924 18:21:03.549341    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.549820    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.442113    8267 addons.go:69] Setting volumesnapshots=true in profile "addons-706965"
	I0924 18:21:03.576423    8267 addons.go:234] Setting addon volumesnapshots=true in "addons-706965"
	I0924 18:21:03.576470    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.576942    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.438861    8267 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-706965"
	I0924 18:21:03.603055    8267 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-706965"
	I0924 18:21:03.603093    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.603556    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.636109    8267 addons.go:234] Setting addon default-storageclass=true in "addons-706965"
	I0924 18:21:03.636151    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.636588    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.644279    8267 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 18:21:03.644430    8267 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 18:21:03.648151    8267 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:03.648224    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 18:21:03.648326    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.648613    8267 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 18:21:03.650717    8267 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 18:21:03.650926    8267 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:03.650966    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 18:21:03.651053    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.673586    8267 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 18:21:03.673657    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 18:21:03.673757    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.694370    8267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 18:21:03.695530    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.697539    8267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:21:03.697767    8267 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0924 18:21:03.700378    8267 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:03.701798    8267 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:03.702056    8267 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:03.702107    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:21:03.702199    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.703692    8267 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:03.703751    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0924 18:21:03.703848    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.743135    8267 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0924 18:21:03.744795    8267 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:03.744814    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0924 18:21:03.744880    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.773348    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:03.790687    8267 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 18:21:03.806395    8267 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 18:21:03.806420    8267 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 18:21:03.806487    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.845315    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 18:21:03.845460    8267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:21:03.849305    8267 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 18:21:03.849329    8267 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 18:21:03.849393    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.883341    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:03.890533    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:03.891390    8267 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0924 18:21:03.892918    8267 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0924 18:21:03.894255    8267 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0924 18:21:03.895257    8267 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-706965"
	I0924 18:21:03.895294    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:03.895711    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:03.898144    8267 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 18:21:03.898164    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0924 18:21:03.898227    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.929708    8267 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:03.929729    8267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:21:03.929790    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.929972    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:03.934574    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 18:21:03.936260    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 18:21:03.938639    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 18:21:03.940319    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 18:21:03.943393    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 18:21:03.944965    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 18:21:03.946414    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 18:21:03.948055    8267 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 18:21:03.951049    8267 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 18:21:03.951071    8267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 18:21:03.951154    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:03.957355    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.002375    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.010305    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.029280    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.044606    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.080059    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.081620    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.095338    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.096073    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:04.096382    8267 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0924 18:21:04.097490    8267 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0924 18:21:04.097513    8267 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0924 18:21:04.097530    8267 retry.go:31] will retry after 131.782462ms: ssh: handshake failed: EOF
	I0924 18:21:04.097514    8267 retry.go:31] will retry after 287.678786ms: ssh: handshake failed: EOF
	I0924 18:21:04.099120    8267 out.go:177]   - Using image docker.io/busybox:stable
	I0924 18:21:04.101336    8267 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:04.101364    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 18:21:04.101429    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:04.126346    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	W0924 18:21:04.127238    8267 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0924 18:21:04.127259    8267 retry.go:31] will retry after 274.687414ms: ssh: handshake failed: EOF
	I0924 18:21:04.410235    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:04.499454    8267 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 18:21:04.499527    8267 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 18:21:04.503386    8267 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 18:21:04.503454    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 18:21:04.581580    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:04.685005    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:04.714720    8267 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 18:21:04.714795    8267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 18:21:04.914972    8267 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 18:21:04.915046    8267 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 18:21:04.934255    8267 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 18:21:04.934330    8267 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 18:21:05.020904    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 18:21:05.069642    8267 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 18:21:05.069665    8267 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 18:21:05.084015    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:05.188904    8267 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 18:21:05.188946    8267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 18:21:05.228109    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:05.231935    8267 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:05.231965    8267 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 18:21:05.256935    8267 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 18:21:05.256964    8267 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 18:21:05.272670    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:05.336633    8267 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 18:21:05.336658    8267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 18:21:05.406945    8267 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 18:21:05.406971    8267 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 18:21:05.489334    8267 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:05.489379    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 18:21:05.550530    8267 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 18:21:05.550560    8267 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 18:21:05.578329    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:05.593283    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:05.623782    8267 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:05.623819    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 18:21:05.653538    8267 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 18:21:05.653563    8267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 18:21:05.663534    8267 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 18:21:05.663567    8267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 18:21:05.692739    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:05.747064    8267 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 18:21:05.747097    8267 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 18:21:05.850749    8267 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 18:21:05.850774    8267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 18:21:05.901928    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:05.922984    8267 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.228579583s)
	I0924 18:21:05.923012    8267 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0924 18:21:05.924052    8267 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.078552551s)
	I0924 18:21:05.924742    8267 node_ready.go:35] waiting up to 6m0s for node "addons-706965" to be "Ready" ...
	I0924 18:21:05.924904    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.514643608s)
	I0924 18:21:05.934080    8267 node_ready.go:49] node "addons-706965" has status "Ready":"True"
	I0924 18:21:05.934109    8267 node_ready.go:38] duration metric: took 9.338556ms for node "addons-706965" to be "Ready" ...
	I0924 18:21:05.934120    8267 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:21:05.955886    8267 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:06.004281    8267 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 18:21:06.004307    8267 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 18:21:06.172843    8267 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 18:21:06.172871    8267 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 18:21:06.307367    8267 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:06.307392    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 18:21:06.314269    8267 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 18:21:06.314296    8267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 18:21:06.427152    8267 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-706965" context rescaled to 1 replicas
	I0924 18:21:06.799422    8267 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 18:21:06.799443    8267 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 18:21:06.867561    8267 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 18:21:06.867585    8267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 18:21:06.898706    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:07.016253    8267 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 18:21:07.016320    8267 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 18:21:07.113198    8267 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 18:21:07.113261    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 18:21:07.478506    8267 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:07.478575    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 18:21:07.532314    8267 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 18:21:07.532382    8267 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 18:21:07.970336    8267 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 18:21:07.970364    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 18:21:07.989018    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:08.025773    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:08.875423    8267 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 18:21:08.875447    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 18:21:09.106284    8267 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:09.106307    8267 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 18:21:09.557612    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:10.465565    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:10.733552    8267 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 18:21:10.733638    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:10.762933    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:12.114794    8267 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 18:21:12.459515    8267 addons.go:234] Setting addon gcp-auth=true in "addons-706965"
	I0924 18:21:12.459559    8267 host.go:66] Checking if "addons-706965" exists ...
	I0924 18:21:12.460008    8267 cli_runner.go:164] Run: docker container inspect addons-706965 --format={{.State.Status}}
	I0924 18:21:12.487613    8267 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 18:21:12.487663    8267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-706965
	I0924 18:21:12.514912    8267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/addons-706965/id_rsa Username:docker}
	I0924 18:21:12.975977    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:14.239553    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.657930141s)
	I0924 18:21:14.239588    8267 addons.go:475] Verifying addon ingress=true in "addons-706965"
	I0924 18:21:14.239731    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.554652219s)
	I0924 18:21:14.241629    8267 out.go:177] * Verifying ingress addon...
	I0924 18:21:14.243669    8267 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0924 18:21:14.248813    8267 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0924 18:21:14.248837    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:14.748714    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.248652    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.462606    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:15.748555    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.311582    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.861802    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.279180    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.490072    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:17.637637    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.616695328s)
	I0924 18:21:17.637717    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.553677421s)
	I0924 18:21:17.637934    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.409802653s)
	I0924 18:21:17.637991    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.365299249s)
	I0924 18:21:17.638034    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.059679168s)
	I0924 18:21:17.638237    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.044926326s)
	I0924 18:21:17.638329    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.945554165s)
	I0924 18:21:17.638352    8267 addons.go:475] Verifying addon metrics-server=true in "addons-706965"
	I0924 18:21:17.638369    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.736416294s)
	I0924 18:21:17.638382    8267 addons.go:475] Verifying addon registry=true in "addons-706965"
	I0924 18:21:17.638451    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.73972333s)
	W0924 18:21:17.638477    8267 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:21:17.638492    8267 retry.go:31] will retry after 344.777976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:21:17.638558    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.612755875s)
	I0924 18:21:17.638745    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.081106622s)
	I0924 18:21:17.638754    8267 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-706965"
	I0924 18:21:17.639218    8267 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.151582642s)
	I0924 18:21:17.642472    8267 out.go:177] * Verifying registry addon...
	I0924 18:21:17.642472    8267 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-706965 service yakd-dashboard -n yakd-dashboard
	
	I0924 18:21:17.645046    8267 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:17.645056    8267 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 18:21:17.648414    8267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 18:21:17.651909    8267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 18:21:17.654385    8267 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 18:21:17.657732    8267 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 18:21:17.657776    8267 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 18:21:17.694004    8267 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 18:21:17.698116    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.694824    8267 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 18:21:17.698212    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0924 18:21:17.695330    8267 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0924 18:21:17.801307    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.814847    8267 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 18:21:17.814920    8267 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 18:21:17.981983    8267 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:17.982008    8267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 18:21:17.983962    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:18.137950    8267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:18.208311    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.209873    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.264099    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:18.652946    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.659270    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.748538    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.152862    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:19.156568    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.253869    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.671324    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.671929    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:19.764510    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.920511    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.782523561s)
	I0924 18:21:19.920857    8267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.936858976s)
	I0924 18:21:19.923740    8267 addons.go:475] Verifying addon gcp-auth=true in "addons-706965"
	I0924 18:21:19.927013    8267 out.go:177] * Verifying gcp-auth addon...
	I0924 18:21:19.930699    8267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 18:21:19.934023    8267 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:21:19.962702    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:20.155169    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.159266    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.248443    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:20.653160    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.656966    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.754424    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.154076    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.156959    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.247834    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.655672    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.657658    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.747844    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.963347    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:22.152840    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.156264    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.254421    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:22.651898    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.656864    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.753688    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.152191    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.156474    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.250352    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.652604    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.656665    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.749217    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.152707    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.156620    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.247839    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.462469    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:24.652828    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.657228    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.748728    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.153368    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:25.156535    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.249103    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.653176    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:25.656510    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.748179    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:26.153265    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.157374    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.248811    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:26.466132    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:26.654407    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.658405    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.755916    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.153784    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.159549    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.248062    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.653600    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.656383    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.749074    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.152849    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.156218    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.248065    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.652933    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.656875    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.748638    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.963089    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:29.153352    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.157423    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.252263    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:29.652041    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.656893    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.748363    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.153365    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.158491    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.248551    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.652504    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.656771    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.748017    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.152837    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.156192    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.248732    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.462110    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:31.652863    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.656807    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.748904    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.153330    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:32.156401    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.248928    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.652847    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:32.656409    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.748918    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.152495    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.156665    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.248486    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.473253    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:33.656491    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.668795    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.748319    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:34.160859    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.184150    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.248198    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:34.654806    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.656884    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.756403    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.152203    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.156714    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.253466    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.652015    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.655915    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.748584    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.972300    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:36.152512    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.156696    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.250021    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:36.655204    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.660296    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.750308    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.154727    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:37.160066    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:37.250091    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.655508    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:37.660426    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:37.748891    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.154772    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.161808    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.249851    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.462830    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:38.664178    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.664785    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.748639    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.152963    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.156686    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.247811    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.652461    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.657241    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.748915    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.152962    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:40.157299    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.248632    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.463506    8267 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:40.653230    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:40.656110    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.748089    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.155856    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:41.167488    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.273614    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.652752    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:41.657037    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.748404    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.152961    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:42.157482    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.249934    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.465037    8267 pod_ready.go:93] pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:42.465062    8267 pod_ready.go:82] duration metric: took 36.509140872s for pod "coredns-7c65d6cfc9-8nhmz" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.465073    8267 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hbx6c" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.467138    8267 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-hbx6c" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hbx6c" not found
	I0924 18:21:42.467165    8267 pod_ready.go:82] duration metric: took 2.083752ms for pod "coredns-7c65d6cfc9-hbx6c" in "kube-system" namespace to be "Ready" ...
	E0924 18:21:42.467176    8267 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-hbx6c" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hbx6c" not found
	I0924 18:21:42.467183    8267 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.473997    8267 pod_ready.go:93] pod "etcd-addons-706965" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:42.474022    8267 pod_ready.go:82] duration metric: took 6.83248ms for pod "etcd-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.474033    8267 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.482148    8267 pod_ready.go:93] pod "kube-apiserver-addons-706965" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:42.482168    8267 pod_ready.go:82] duration metric: took 8.127816ms for pod "kube-apiserver-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.482178    8267 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.490116    8267 pod_ready.go:93] pod "kube-controller-manager-addons-706965" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:42.490140    8267 pod_ready.go:82] duration metric: took 7.954858ms for pod "kube-controller-manager-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.490151    8267 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zs4bl" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.653508    8267 kapi.go:107] duration metric: took 25.00509076s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 18:21:42.657336    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.659891    8267 pod_ready.go:93] pod "kube-proxy-zs4bl" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:42.659914    8267 pod_ready.go:82] duration metric: took 169.756455ms for pod "kube-proxy-zs4bl" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.659924    8267 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:42.748815    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.059489    8267 pod_ready.go:93] pod "kube-scheduler-addons-706965" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:43.059515    8267 pod_ready.go:82] duration metric: took 399.582796ms for pod "kube-scheduler-addons-706965" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:43.059527    8267 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-h4jpw" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:43.156873    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.249912    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.459786    8267 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-h4jpw" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:43.459845    8267 pod_ready.go:82] duration metric: took 400.309709ms for pod "nvidia-device-plugin-daemonset-h4jpw" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:43.459869    8267 pod_ready.go:39] duration metric: took 37.525736727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:21:43.459902    8267 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:21:43.459982    8267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:21:43.475106    8267 api_server.go:72] duration metric: took 40.042290939s to wait for apiserver process to appear ...
	I0924 18:21:43.475131    8267 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:21:43.475152    8267 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0924 18:21:43.482968    8267 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0924 18:21:43.484065    8267 api_server.go:141] control plane version: v1.31.1
	I0924 18:21:43.484092    8267 api_server.go:131] duration metric: took 8.954141ms to wait for apiserver health ...
	I0924 18:21:43.484102    8267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:21:43.657549    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.668293    8267 system_pods.go:59] 17 kube-system pods found
	I0924 18:21:43.668333    8267 system_pods.go:61] "coredns-7c65d6cfc9-8nhmz" [3bb63f12-2f52-4559-8611-ae2c8103ecd0] Running
	I0924 18:21:43.668343    8267 system_pods.go:61] "csi-hostpath-attacher-0" [12a1b9c7-76f8-4800-84ea-fefea4633731] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:43.668350    8267 system_pods.go:61] "csi-hostpath-resizer-0" [a8da6569-37a8-479a-978e-34c20932b0df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:43.668360    8267 system_pods.go:61] "csi-hostpathplugin-fvg5q" [560d607e-d28a-413d-ae8a-cbe82df9fe10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:43.668364    8267 system_pods.go:61] "etcd-addons-706965" [c2552a6f-dd38-415d-856b-879b8802bce4] Running
	I0924 18:21:43.668370    8267 system_pods.go:61] "kube-apiserver-addons-706965" [471cf0da-be46-40e6-8d44-d4128566413c] Running
	I0924 18:21:43.668374    8267 system_pods.go:61] "kube-controller-manager-addons-706965" [c5896096-7ab2-45e9-9b51-c700f291102d] Running
	I0924 18:21:43.668380    8267 system_pods.go:61] "kube-ingress-dns-minikube" [df9b96eb-c7fd-4b07-9fcd-f862e608c7be] Running
	I0924 18:21:43.668389    8267 system_pods.go:61] "kube-proxy-zs4bl" [0656d6fd-f336-430a-b38e-c1b7e7acee8a] Running
	I0924 18:21:43.668393    8267 system_pods.go:61] "kube-scheduler-addons-706965" [d6d43568-6a8e-45e9-ac21-4b02d1d39d14] Running
	I0924 18:21:43.668398    8267 system_pods.go:61] "metrics-server-84c5f94fbc-h5hgc" [b59a81ec-81e4-43e5-be73-e8124168ec83] Running
	I0924 18:21:43.668405    8267 system_pods.go:61] "nvidia-device-plugin-daemonset-h4jpw" [ca8e2bf5-a3a8-45ff-982d-6671ac0cdd2e] Running
	I0924 18:21:43.668409    8267 system_pods.go:61] "registry-66c9cd494c-rjzd4" [87b7747e-ac02-4a01-b537-3ccd964580e8] Running
	I0924 18:21:43.668413    8267 system_pods.go:61] "registry-proxy-hgvv7" [cbaedb6d-c887-4a40-9fe1-fd09e1825332] Running
	I0924 18:21:43.668419    8267 system_pods.go:61] "snapshot-controller-56fcc65765-djwjf" [10da51e2-09db-407d-b819-bc0ade169aa2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:43.668429    8267 system_pods.go:61] "snapshot-controller-56fcc65765-ws45j" [2161b58c-ba9d-4855-b8fc-685a40b1abe1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:43.668434    8267 system_pods.go:61] "storage-provisioner" [c449bcc6-4b53-4066-8f99-7b96d527a53d] Running
	I0924 18:21:43.668440    8267 system_pods.go:74] duration metric: took 184.331658ms to wait for pod list to return data ...
	I0924 18:21:43.668450    8267 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:21:43.748652    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.859673    8267 default_sa.go:45] found service account: "default"
	I0924 18:21:43.859697    8267 default_sa.go:55] duration metric: took 191.241269ms for default service account to be created ...
	I0924 18:21:43.859706    8267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:21:44.067396    8267 system_pods.go:86] 17 kube-system pods found
	I0924 18:21:44.067489    8267 system_pods.go:89] "coredns-7c65d6cfc9-8nhmz" [3bb63f12-2f52-4559-8611-ae2c8103ecd0] Running
	I0924 18:21:44.067517    8267 system_pods.go:89] "csi-hostpath-attacher-0" [12a1b9c7-76f8-4800-84ea-fefea4633731] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:44.067556    8267 system_pods.go:89] "csi-hostpath-resizer-0" [a8da6569-37a8-479a-978e-34c20932b0df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:44.067589    8267 system_pods.go:89] "csi-hostpathplugin-fvg5q" [560d607e-d28a-413d-ae8a-cbe82df9fe10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:44.067611    8267 system_pods.go:89] "etcd-addons-706965" [c2552a6f-dd38-415d-856b-879b8802bce4] Running
	I0924 18:21:44.067630    8267 system_pods.go:89] "kube-apiserver-addons-706965" [471cf0da-be46-40e6-8d44-d4128566413c] Running
	I0924 18:21:44.067663    8267 system_pods.go:89] "kube-controller-manager-addons-706965" [c5896096-7ab2-45e9-9b51-c700f291102d] Running
	I0924 18:21:44.067686    8267 system_pods.go:89] "kube-ingress-dns-minikube" [df9b96eb-c7fd-4b07-9fcd-f862e608c7be] Running
	I0924 18:21:44.067705    8267 system_pods.go:89] "kube-proxy-zs4bl" [0656d6fd-f336-430a-b38e-c1b7e7acee8a] Running
	I0924 18:21:44.067727    8267 system_pods.go:89] "kube-scheduler-addons-706965" [d6d43568-6a8e-45e9-ac21-4b02d1d39d14] Running
	I0924 18:21:44.067746    8267 system_pods.go:89] "metrics-server-84c5f94fbc-h5hgc" [b59a81ec-81e4-43e5-be73-e8124168ec83] Running
	I0924 18:21:44.067773    8267 system_pods.go:89] "nvidia-device-plugin-daemonset-h4jpw" [ca8e2bf5-a3a8-45ff-982d-6671ac0cdd2e] Running
	I0924 18:21:44.067799    8267 system_pods.go:89] "registry-66c9cd494c-rjzd4" [87b7747e-ac02-4a01-b537-3ccd964580e8] Running
	I0924 18:21:44.067820    8267 system_pods.go:89] "registry-proxy-hgvv7" [cbaedb6d-c887-4a40-9fe1-fd09e1825332] Running
	I0924 18:21:44.067843    8267 system_pods.go:89] "snapshot-controller-56fcc65765-djwjf" [10da51e2-09db-407d-b819-bc0ade169aa2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:44.067876    8267 system_pods.go:89] "snapshot-controller-56fcc65765-ws45j" [2161b58c-ba9d-4855-b8fc-685a40b1abe1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:44.067901    8267 system_pods.go:89] "storage-provisioner" [c449bcc6-4b53-4066-8f99-7b96d527a53d] Running
	I0924 18:21:44.067924    8267 system_pods.go:126] duration metric: took 208.211404ms to wait for k8s-apps to be running ...
	I0924 18:21:44.067944    8267 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:21:44.068032    8267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:21:44.081884    8267 system_svc.go:56] duration metric: took 13.931265ms WaitForService to wait for kubelet
	I0924 18:21:44.081953    8267 kubeadm.go:582] duration metric: took 40.649142995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:21:44.081987    8267 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:21:44.156422    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.248279    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:44.260695    8267 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0924 18:21:44.260728    8267 node_conditions.go:123] node cpu capacity is 2
	I0924 18:21:44.260742    8267 node_conditions.go:105] duration metric: took 178.73323ms to run NodePressure ...
	I0924 18:21:44.260752    8267 start.go:241] waiting for startup goroutines ...
	I0924 18:21:44.658186    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.748513    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.160049    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.248639    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.657619    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.748408    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.158217    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.249393    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.657076    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.748404    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.157764    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.248835    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.657545    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.748549    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.158144    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.258136    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.668876    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.768280    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.157617    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.248092    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.656666    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.749189    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.156592    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.248246    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.656932    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.854766    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.160666    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.260714    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.656559    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.748187    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.156745    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:52.247966    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.657363    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:52.757904    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.158409    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.258200    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.658009    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.749723    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.157183    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.248183    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.656268    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.749338    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.159911    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.248256    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.656528    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.748319    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.157483    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.253353    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.656902    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.748865    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.160861    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.247914    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.657410    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.748646    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.230478    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.251315    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.657453    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.757743    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.158093    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.258935    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.657977    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.747840    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.160749    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.255208    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.656959    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.747889    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.157261    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.248696    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.657272    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.749474    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.157029    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:02.248817    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.657206    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:02.748956    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.157354    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.249672    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.657025    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.747988    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.158034    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.248545    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.657045    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.748635    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.156806    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.249697    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.657063    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.748345    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.157925    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.248229    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.657026    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.749783    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.158959    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.259540    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.657907    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.747948    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.158153    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:08.248367    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.657270    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:08.748576    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.157085    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:09.248125    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.657331    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:09.750107    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.158174    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:10.258959    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.656870    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:10.748573    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.156576    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:11.248798    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.657569    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:11.748913    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.157633    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:12.248889    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.657356    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:12.748077    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.156818    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:13.248438    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.658130    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:13.748468    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.157403    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:14.257413    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.656706    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:14.747776    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.156904    8267 kapi.go:107] duration metric: took 57.504994405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0924 18:22:15.248098    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.747519    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.248952    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.747736    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.248997    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.747692    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.248813    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.748434    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:19.248938    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:19.747707    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:20.256602    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:20.748159    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:21.248569    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:21.749099    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:22.248659    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:22.753539    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:23.250216    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:23.749277    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:24.269196    8267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:24.749533    8267 kapi.go:107] duration metric: took 1m10.505852985s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0924 18:22:41.936305    8267 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:22:41.936336    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:42.434389    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:42.934407    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:43.435893    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:43.935044    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:44.435227    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:44.934984    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:45.434681    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:45.934664    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:46.434784    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:46.934709    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:47.434669    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:47.934401    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:48.434438    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:48.935021    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:49.433920    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:49.934819    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:50.434472    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:50.934630    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:51.435387    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:51.933978    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:52.433919    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:52.935600    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:53.435054    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:53.934719    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:54.434400    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:54.934925    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:55.434396    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:55.934652    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:56.435314    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:56.934340    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:57.435010    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:57.934928    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:58.434584    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:58.934197    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:59.434665    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:59.934821    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:00.434301    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:00.934545    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:01.434686    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:01.934661    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:02.434008    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:02.934718    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:03.434466    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:03.934873    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:04.434524    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:04.934307    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:05.434361    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:05.933774    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:06.434343    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:06.933883    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:07.434703    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:07.933921    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:08.434892    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:08.934687    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:09.435064    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:09.935134    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:10.435147    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:10.935685    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:11.434544    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:11.935316    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:12.433791    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:12.933873    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:13.435251    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:13.935183    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:14.433924    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:14.935064    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:15.434070    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:15.934559    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:16.434289    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:16.933838    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:17.435155    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:17.934192    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:18.435113    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:18.935284    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:19.434977    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:19.935870    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:20.434206    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:20.934332    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:21.434809    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:21.934142    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:22.434442    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:22.934389    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:23.435511    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:23.934117    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:24.434966    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:24.934431    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:25.434928    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:25.934780    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:26.434467    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:26.934486    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:27.434465    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:27.934197    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:28.435114    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:28.934180    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:29.434072    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:29.934818    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:30.434329    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:30.935032    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:31.434959    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:31.934628    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:32.433960    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:32.933940    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:33.434878    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:33.935152    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:34.433881    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:34.934353    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:35.434729    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:35.934522    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:36.434658    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:36.934582    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:37.434001    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:37.934778    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:38.438444    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:38.934479    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:39.434864    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:39.935333    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:40.434034    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:40.934860    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:41.435092    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:41.934822    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:42.434797    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:42.934756    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:43.434724    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:43.935188    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:44.434074    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:44.935000    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:45.434551    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:45.934003    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:46.434963    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:46.934349    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:47.434833    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:47.934279    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:48.435146    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:48.934209    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:49.435092    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:49.935038    8267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:23:50.436908    8267 kapi.go:107] duration metric: took 2m30.506204579s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 18:23:50.439648    8267 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-706965 cluster.
	I0924 18:23:50.442580    8267 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 18:23:50.445016    8267 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0924 18:23:50.448026    8267 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, volcano, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0924 18:23:50.450639    8267 addons.go:510] duration metric: took 2m47.017434336s for enable addons: enabled=[nvidia-device-plugin cloud-spanner volcano ingress-dns storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0924 18:23:50.450698    8267 start.go:246] waiting for cluster config update ...
	I0924 18:23:50.450719    8267 start.go:255] writing updated cluster config ...
	I0924 18:23:50.451054    8267 ssh_runner.go:195] Run: rm -f paused
	I0924 18:23:50.793252    8267 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:23:50.801204    8267 out.go:177] * Done! kubectl is now configured to use "addons-706965" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 24 18:33:32 addons-706965 dockerd[1287]: time="2024-09-24T18:33:32.158020293Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=d83cb9126fde4ab6 traceID=98efad86f8f474c0d9e17d3576c1ac9f
	Sep 24 18:33:34 addons-706965 dockerd[1287]: time="2024-09-24T18:33:34.804750501Z" level=info msg="ignoring event" container=f9f7e572436cff987797d28cc7ce81f2c23ef1baa8bb74154131a6ef1ec263da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:34 addons-706965 dockerd[1287]: time="2024-09-24T18:33:34.919136420Z" level=info msg="ignoring event" container=45527baf02eb213804d514323177fd3d7e3a7d36f96769087c36f9950d409f4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.657882778Z" level=info msg="ignoring event" container=8bf051323bd90688d12a8a335d2643aa5eae7ee978c12785876d1298a2ef189d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.718049516Z" level=info msg="ignoring event" container=8485215d3addfdde0a91952f50c1370f576eb2297e8c41a74ef2c0452bfd9c67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.724870707Z" level=info msg="ignoring event" container=ace3f8627aa945c638b40914162181fd2c4f2857126fea202a5ede5e53cdabc2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.729722010Z" level=info msg="ignoring event" container=2d57b9beffc5d13931ad106cd591abed2de16b75afd1fceeda4a6648b67a1200 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.736130599Z" level=info msg="ignoring event" container=4f61a0478aa95b7e3ad69ceb565401598f4f6432624495282ade564aa7e31ea9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.761040184Z" level=info msg="ignoring event" container=da4fbeb7d506a9ba50d5f33edc6360f874bfec7cf7b501a87d2721248698f85f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.768831258Z" level=info msg="ignoring event" container=ac6942813f601157de7895d54f57f55f4149251ddd7e69a0adbac80a5ea75f5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.773199323Z" level=info msg="ignoring event" container=fdf586e990809c8d17145a577bfca2bc54a1826390aa4949cd2e2622fdeaa205 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.940842293Z" level=info msg="ignoring event" container=54dd833187f722906a06a5c52dcc8eecbe70312e63cae71360d22458d4eaea0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:36 addons-706965 dockerd[1287]: time="2024-09-24T18:33:36.979337029Z" level=info msg="ignoring event" container=8dc462ef469359f3a56f68b43bb27b000521406ce146c113afc5ef0fc3b6514e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:37 addons-706965 dockerd[1287]: time="2024-09-24T18:33:37.103999967Z" level=info msg="ignoring event" container=6729b5434cb9c0541e71081c1db50460d8ddbeda18b3ce663b57910473036741 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:43 addons-706965 dockerd[1287]: time="2024-09-24T18:33:43.203296795Z" level=info msg="ignoring event" container=b63c3d95f9c7e452efec270421117827971fab54563a526a174d9c37842ccab0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:43 addons-706965 dockerd[1287]: time="2024-09-24T18:33:43.230917444Z" level=info msg="ignoring event" container=bc2a54cefaa0e101399d751b696ce41dcdf7e29fa614403a7c405639e4c24c8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:43 addons-706965 dockerd[1287]: time="2024-09-24T18:33:43.412975965Z" level=info msg="ignoring event" container=b8cf1a5cd0a8dfd1433d16c42b287cd459c5451e92ef658eb626fc3ffe6de2bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:43 addons-706965 dockerd[1287]: time="2024-09-24T18:33:43.462913154Z" level=info msg="ignoring event" container=296f083ac846e2ddaa78b282e1a25a6a39e3099067dc8af85b43657636f12c4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:48 addons-706965 dockerd[1287]: time="2024-09-24T18:33:48.603213662Z" level=info msg="ignoring event" container=bccf151124dad69e5e3e207bdf8e8ed0623c28067e307f54fe5f7b53a5019e77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:49 addons-706965 dockerd[1287]: time="2024-09-24T18:33:49.464785524Z" level=info msg="ignoring event" container=c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:49 addons-706965 dockerd[1287]: time="2024-09-24T18:33:49.504504468Z" level=info msg="ignoring event" container=a3b14bc765c787c5705c45dea1acf7365d944e6a09ad41e2568efe7f33d9b3da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:49 addons-706965 dockerd[1287]: time="2024-09-24T18:33:49.659460168Z" level=info msg="ignoring event" container=8ad6780b50c079382a93419be67f106aa846a12c6638022c88921411e30ef5f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:49 addons-706965 dockerd[1287]: time="2024-09-24T18:33:49.746577008Z" level=info msg="ignoring event" container=3c4cf4afb938ec34bafc24dee8e645121f0a74fa251c329d29ed60c62bcb7002 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:50 addons-706965 dockerd[1287]: time="2024-09-24T18:33:50.325385347Z" level=info msg="ignoring event" container=9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:33:50 addons-706965 dockerd[1287]: time="2024-09-24T18:33:50.455554412Z" level=info msg="ignoring event" container=c7fc72e4f361983a4a1c0d196cdbfe536093f6ee3f9767454a2826d4bf35fd74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	2e3f3e002c738       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            46 seconds ago      Exited              gadget                     7                   8bfcd61d6e982       gadget-249vh
	642635ad0143e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   565b40aaed260       gcp-auth-89d5ffd79-mvdzv
	e487d67cf3d43       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   9835175a51ea8       ingress-nginx-controller-bc57996ff-xtczq
	0d15ec6ba2b0d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   173dee7589ec7       ingress-nginx-admission-patch-rjbwc
	c35334bf8630d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   36708db49f62e       ingress-nginx-admission-create-zkqpg
	abd407477c864       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   ad50e8d37c54d       local-path-provisioner-86d989889c-gt678
	568dac2265c5d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   0bf4e8b271ea9       yakd-dashboard-67d98fc6b-vhzds
	9a0fb33491edf       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Exited              metrics-server             0                   c7fc72e4f3619       metrics-server-84c5f94fbc-h5hgc
	bb87d1a9e5fa4       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   5b59e13868852       kube-ingress-dns-minikube
	ade5f8207e5aa       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e               12 minutes ago      Running             cloud-spanner-emulator     0                   fb67d36efc8f1       cloud-spanner-emulator-5b584cc74-svwq9
	7a471eb9feef7       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   7529ef96df0f8       nvidia-device-plugin-daemonset-h4jpw
	3dcc4af57f6ad       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   c4bfc1712d9fa       storage-provisioner
	3be62d8575582       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   f814d827c63ec       kube-proxy-zs4bl
	25e0a6a7b6faf       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   1e30b4155a47b       coredns-7c65d6cfc9-8nhmz
	ad8934b19b921       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   e8c60751280f6       kube-scheduler-addons-706965
	736e4a29396c7       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   9bd96327e25da       kube-apiserver-addons-706965
	76217f763ead8       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   26788482db6ec       etcd-addons-706965
	1b1f3665e37ff       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   7718993ef3add       kube-controller-manager-addons-706965
	
	
	==> controller_ingress [e487d67cf3d4] <==
	I0924 18:22:24.142171       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0924 18:22:24.478591       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0924 18:22:24.504781       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0924 18:22:24.520853       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0924 18:22:24.543364       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0921a3a0-a407-4da0-bd32-8bd5c25ac18a", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0924 18:22:24.552213       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"cb8fe104-742e-4ba6-b19b-8a89ecd38c8e", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0924 18:22:24.552287       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"36b6e23c-58c5-44a4-9b44-9f5326ec09e9", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0924 18:22:25.724243       6 nginx.go:317] "Starting NGINX process"
	I0924 18:22:25.724293       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0924 18:22:25.729932       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0924 18:22:25.732982       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0924 18:22:25.740637       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0924 18:22:25.741247       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-xtczq"
	I0924 18:22:25.748907       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-xtczq" node="addons-706965"
	I0924 18:22:25.789305       6 controller.go:213] "Backend successfully reloaded"
	I0924 18:22:25.789386       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0924 18:22:25.789479       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xtczq", UID:"b25bb7f6-8fbb-4136-bb5c-54a1202ca2f9", APIVersion:"v1", ResourceVersion:"1238", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [25e0a6a7b6fa] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[166005451]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 18:21:05.451) (total time: 30001ms):
	Trace[166005451]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:21:35.452)
	Trace[166005451]: [30.001037589s] [30.001037589s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1238531312]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 18:21:05.451) (total time: 30000ms):
	Trace[1238531312]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:21:35.452)
	Trace[1238531312]: [30.000908777s] [30.000908777s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47116 - 20229 "HINFO IN 7303047396330757251.2575600522816382858. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069271169s
	[INFO] 10.244.0.25:42532 - 29370 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264572s
	[INFO] 10.244.0.25:57727 - 50366 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155109s
	[INFO] 10.244.0.25:43832 - 28437 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115568s
	[INFO] 10.244.0.25:38826 - 24300 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011236s
	[INFO] 10.244.0.25:53075 - 39386 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112697s
	[INFO] 10.244.0.25:42691 - 48801 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108626s
	[INFO] 10.244.0.25:37863 - 6692 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002115927s
	[INFO] 10.244.0.25:43920 - 59327 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003164813s
	[INFO] 10.244.0.25:58135 - 35589 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001767352s
	[INFO] 10.244.0.25:55460 - 55365 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001924815s
	
	
	==> describe nodes <==
	Name:               addons-706965
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-706965
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=addons-706965
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_20_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-706965
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:20:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-706965
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:33:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:33:03 +0000   Tue, 24 Sep 2024 18:20:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:33:03 +0000   Tue, 24 Sep 2024 18:20:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:33:03 +0000   Tue, 24 Sep 2024 18:20:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:33:03 +0000   Tue, 24 Sep 2024 18:20:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-706965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a50b63db708649cf92a37094dbc99dd7
	  System UUID:                9900c6fb-344b-47d2-ac30-597178f2ca45
	  Boot ID:                    522dd362-4466-4c91-87cc-97b45aa342c2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-5b584cc74-svwq9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-249vh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-mvdzv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xtczq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-8nhmz                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-706965                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-706965                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-706965       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zs4bl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-706965                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-h4jpw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-gt678     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-vhzds              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-706965 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-706965 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-706965 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-706965 event: Registered Node addons-706965 in Controller
	
	
	==> dmesg <==
	[Sep24 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015241] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.461689] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.832545] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.415185] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [76217f763ead] <==
	{"level":"info","ts":"2024-09-24T18:20:52.965459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-24T18:20:52.965517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T18:20:52.965546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-24T18:20:52.965605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-24T18:20:52.965644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-24T18:20:52.969313Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:20:52.973448Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-706965 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T18:20:52.973761Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:20:52.974077Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:20:52.974253Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:20:52.974399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:20:52.974863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:20:52.988335Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:20:52.989724Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T18:20:52.998051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:20:53.008600Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-24T18:20:53.037263Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T18:20:53.041167Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T18:21:50.851593Z","caller":"traceutil/trace.go:171","msg":"trace[141924304] linearizableReadLoop","detail":"{readStateIndex:1099; appliedIndex:1098; }","duration":"105.544803ms","start":"2024-09-24T18:21:50.746033Z","end":"2024-09-24T18:21:50.851577Z","steps":["trace[141924304] 'read index received'  (duration: 68.026325ms)","trace[141924304] 'applied index is now lower than readState.Index'  (duration: 37.517846ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T18:21:50.851725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.674815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:21:50.851753Z","caller":"traceutil/trace.go:171","msg":"trace[1416018625] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1074; }","duration":"105.719492ms","start":"2024-09-24T18:21:50.746027Z","end":"2024-09-24T18:21:50.851746Z","steps":["trace[1416018625] 'agreement among raft nodes before linearized reading'  (duration: 105.61801ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:21:50.851893Z","caller":"traceutil/trace.go:171","msg":"trace[603020897] transaction","detail":"{read_only:false; response_revision:1074; number_of_response:1; }","duration":"107.246463ms","start":"2024-09-24T18:21:50.744639Z","end":"2024-09-24T18:21:50.851885Z","steps":["trace[603020897] 'process raft request'  (duration: 69.447752ms)","trace[603020897] 'compare'  (duration: 37.262129ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:30:53.900787Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1855}
	{"level":"info","ts":"2024-09-24T18:30:53.954914Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1855,"took":"53.586122ms","hash":2386339281,"current-db-size-bytes":8667136,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4894720,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-24T18:30:53.954975Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2386339281,"revision":1855,"compact-revision":-1}
	
	
	==> gcp-auth [642635ad0143] <==
	2024/09/24 18:23:49 GCP Auth Webhook started!
	2024/09/24 18:24:07 Ready to marshal response ...
	2024/09/24 18:24:07 Ready to write response ...
	2024/09/24 18:24:07 Ready to marshal response ...
	2024/09/24 18:24:07 Ready to write response ...
	2024/09/24 18:24:32 Ready to marshal response ...
	2024/09/24 18:24:32 Ready to write response ...
	2024/09/24 18:24:32 Ready to marshal response ...
	2024/09/24 18:24:32 Ready to write response ...
	2024/09/24 18:24:33 Ready to marshal response ...
	2024/09/24 18:24:33 Ready to write response ...
	2024/09/24 18:32:37 Ready to marshal response ...
	2024/09/24 18:32:37 Ready to write response ...
	2024/09/24 18:32:37 Ready to marshal response ...
	2024/09/24 18:32:37 Ready to write response ...
	2024/09/24 18:32:37 Ready to marshal response ...
	2024/09/24 18:32:37 Ready to write response ...
	2024/09/24 18:32:48 Ready to marshal response ...
	2024/09/24 18:32:48 Ready to write response ...
	2024/09/24 18:33:04 Ready to marshal response ...
	2024/09/24 18:33:04 Ready to write response ...
	2024/09/24 18:33:27 Ready to marshal response ...
	2024/09/24 18:33:27 Ready to write response ...
	
	
	==> kernel <==
	 18:33:51 up 16 min,  0 users,  load average: 1.24, 0.70, 0.54
	Linux addons-706965 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [736e4a29396c] <==
	I0924 18:24:23.259415       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0924 18:24:23.309427       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0924 18:24:23.372621       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0924 18:24:23.419347       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0924 18:24:24.013710       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0924 18:24:24.013724       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0924 18:24:24.013747       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0924 18:24:24.069507       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0924 18:24:24.373713       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0924 18:24:24.657483       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0924 18:32:37.082133       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.51.153"}
	I0924 18:33:12.519005       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0924 18:33:42.924084       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:33:42.924128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:33:42.948221       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:33:42.948269       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:33:42.955909       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:33:42.955957       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:33:42.992877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:33:42.992945       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:33:43.118523       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:33:43.119757       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0924 18:33:43.957242       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0924 18:33:44.119397       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0924 18:33:44.231108       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [1b1f3665e37f] <==
	I0924 18:33:37.679249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-706965"
	I0924 18:33:43.144836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="4.784µs"
	W0924 18:33:43.897758       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:43.897799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0924 18:33:43.958931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0924 18:33:44.121865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0924 18:33:44.232923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:44.809863       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:44.809906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:45.260611       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:45.260654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:45.594146       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:45.594185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:45.667443       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:45.667488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:47.407772       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:47.407820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:47.897003       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:47.897046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:48.163326       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:48.163368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:48.323408       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:48.323455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:33:49.188730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="4.299µs"
	I0924 18:33:49.373680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.348µs"
	
	
	==> kube-proxy [3be62d857558] <==
	I0924 18:21:06.779238       1 server_linux.go:66] "Using iptables proxy"
	I0924 18:21:06.887337       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0924 18:21:06.887403       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:21:06.915375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0924 18:21:06.915450       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:21:06.917946       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:21:06.918321       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:21:06.918343       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:21:06.924327       1 config.go:199] "Starting service config controller"
	I0924 18:21:06.924367       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:21:06.924392       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:21:06.924397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:21:06.924996       1 config.go:328] "Starting node config controller"
	I0924 18:21:06.925025       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:21:07.025732       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:21:07.025770       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:21:07.025796       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ad8934b19b92] <==
	W0924 18:20:56.816622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:56.816725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.816868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 18:20:56.817125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.817031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 18:20:56.817417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.817082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 18:20:56.817634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.820164       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:56.820300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.820456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:56.820554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.820690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:20:56.820788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.824796       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:56.824976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.825182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:56.825311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.825493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:20:56.825605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:56.825864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 18:20:56.825998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:57.809443       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:20:57.809720       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 18:21:00.806548       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.779729    2362 scope.go:117] "RemoveContainer" containerID="c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082"
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.825283    2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdj7x\" (UniqueName: \"kubernetes.io/projected/87b7747e-ac02-4a01-b537-3ccd964580e8-kube-api-access-vdj7x\") pod \"87b7747e-ac02-4a01-b537-3ccd964580e8\" (UID: \"87b7747e-ac02-4a01-b537-3ccd964580e8\") "
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.826627    2362 scope.go:117] "RemoveContainer" containerID="c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082"
	Sep 24 18:33:49 addons-706965 kubelet[2362]: E0924 18:33:49.828254    2362 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082" containerID="c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082"
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.828411    2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082"} err="failed to get container status \"c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082\": rpc error: code = Unknown desc = Error response from daemon: No such container: c446902fac4b12b1969d65f0ca280640493c0badf5eacad1de9a6ced54a8d082"
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.828616    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b7747e-ac02-4a01-b537-3ccd964580e8-kube-api-access-vdj7x" (OuterVolumeSpecName: "kube-api-access-vdj7x") pod "87b7747e-ac02-4a01-b537-3ccd964580e8" (UID: "87b7747e-ac02-4a01-b537-3ccd964580e8"). InnerVolumeSpecName "kube-api-access-vdj7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.862723    2362 scope.go:117] "RemoveContainer" containerID="a3b14bc765c787c5705c45dea1acf7365d944e6a09ad41e2568efe7f33d9b3da"
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.926082    2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbcb5\" (UniqueName: \"kubernetes.io/projected/cbaedb6d-c887-4a40-9fe1-fd09e1825332-kube-api-access-sbcb5\") pod \"cbaedb6d-c887-4a40-9fe1-fd09e1825332\" (UID: \"cbaedb6d-c887-4a40-9fe1-fd09e1825332\") "
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.926207    2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vdj7x\" (UniqueName: \"kubernetes.io/projected/87b7747e-ac02-4a01-b537-3ccd964580e8-kube-api-access-vdj7x\") on node \"addons-706965\" DevicePath \"\""
	Sep 24 18:33:49 addons-706965 kubelet[2362]: I0924 18:33:49.928633    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbaedb6d-c887-4a40-9fe1-fd09e1825332-kube-api-access-sbcb5" (OuterVolumeSpecName: "kube-api-access-sbcb5") pod "cbaedb6d-c887-4a40-9fe1-fd09e1825332" (UID: "cbaedb6d-c887-4a40-9fe1-fd09e1825332"). InnerVolumeSpecName "kube-api-access-sbcb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.027503    2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sbcb5\" (UniqueName: \"kubernetes.io/projected/cbaedb6d-c887-4a40-9fe1-fd09e1825332-kube-api-access-sbcb5\") on node \"addons-706965\" DevicePath \"\""
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.632951    2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4trm\" (UniqueName: \"kubernetes.io/projected/b59a81ec-81e4-43e5-be73-e8124168ec83-kube-api-access-v4trm\") pod \"b59a81ec-81e4-43e5-be73-e8124168ec83\" (UID: \"b59a81ec-81e4-43e5-be73-e8124168ec83\") "
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.633009    2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b59a81ec-81e4-43e5-be73-e8124168ec83-tmp-dir\") pod \"b59a81ec-81e4-43e5-be73-e8124168ec83\" (UID: \"b59a81ec-81e4-43e5-be73-e8124168ec83\") "
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.633410    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b59a81ec-81e4-43e5-be73-e8124168ec83-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "b59a81ec-81e4-43e5-be73-e8124168ec83" (UID: "b59a81ec-81e4-43e5-be73-e8124168ec83"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.635298    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59a81ec-81e4-43e5-be73-e8124168ec83-kube-api-access-v4trm" (OuterVolumeSpecName: "kube-api-access-v4trm") pod "b59a81ec-81e4-43e5-be73-e8124168ec83" (UID: "b59a81ec-81e4-43e5-be73-e8124168ec83"). InnerVolumeSpecName "kube-api-access-v4trm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.733920    2362 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b59a81ec-81e4-43e5-be73-e8124168ec83-tmp-dir\") on node \"addons-706965\" DevicePath \"\""
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.733957    2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v4trm\" (UniqueName: \"kubernetes.io/projected/b59a81ec-81e4-43e5-be73-e8124168ec83-kube-api-access-v4trm\") on node \"addons-706965\" DevicePath \"\""
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.917303    2362 scope.go:117] "RemoveContainer" containerID="9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2"
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.949433    2362 scope.go:117] "RemoveContainer" containerID="9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2"
	Sep 24 18:33:50 addons-706965 kubelet[2362]: E0924 18:33:50.950932    2362 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2" containerID="9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2"
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.951122    2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2"} err="failed to get container status \"9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 9a0fb33491edfc6f8e4faf686991a6ff25a4b24a05e424af80a96392a2917fd2"
	Sep 24 18:33:50 addons-706965 kubelet[2362]: E0924 18:33:50.966362    2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="d4a700f4-a01f-47fd-b3f1-cf09744f094d"
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.980960    2362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b7747e-ac02-4a01-b537-3ccd964580e8" path="/var/lib/kubelet/pods/87b7747e-ac02-4a01-b537-3ccd964580e8/volumes"
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.981485    2362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59a81ec-81e4-43e5-be73-e8124168ec83" path="/var/lib/kubelet/pods/b59a81ec-81e4-43e5-be73-e8124168ec83/volumes"
	Sep 24 18:33:50 addons-706965 kubelet[2362]: I0924 18:33:50.981848    2362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbaedb6d-c887-4a40-9fe1-fd09e1825332" path="/var/lib/kubelet/pods/cbaedb6d-c887-4a40-9fe1-fd09e1825332/volumes"
	
	
	==> storage-provisioner [3dcc4af57f6a] <==
	I0924 18:21:11.903934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:21:11.942090       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:21:11.942167       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:21:11.961446       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:21:11.970086       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-706965_93814731-a966-418d-af8f-2a167e39f192!
	I0924 18:21:11.970807       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99fe901a-455b-4858-bb8b-2f0460495c20", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-706965_93814731-a966-418d-af8f-2a167e39f192 became leader
	I0924 18:21:12.070655       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-706965_93814731-a966-418d-af8f-2a167e39f192!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-706965 -n addons-706965
helpers_test.go:261: (dbg) Run:  kubectl --context addons-706965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-zkqpg ingress-nginx-admission-patch-rjbwc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-706965 describe pod busybox ingress-nginx-admission-create-zkqpg ingress-nginx-admission-patch-rjbwc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-706965 describe pod busybox ingress-nginx-admission-create-zkqpg ingress-nginx-admission-patch-rjbwc: exit status 1 (99.36752ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-706965/192.168.49.2
	Start Time:       Tue, 24 Sep 2024 18:24:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rkp5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8rkp5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-706965
	  Normal   Pulling    7m54s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m53s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m53s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m39s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zkqpg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rjbwc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-706965 describe pod busybox ingress-nginx-admission-create-zkqpg ingress-nginx-admission-patch-rjbwc: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.70s)


Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.43
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.25
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.31.1/json-events 5.58
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
22 TestOffline 86.4
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 221.71
29 TestAddons/serial/Volcano 41.76
31 TestAddons/serial/GCPAuth/Namespaces 0.17
34 TestAddons/parallel/Ingress 19.12
35 TestAddons/parallel/InspektorGadget 11.99
36 TestAddons/parallel/MetricsServer 5.98
38 TestAddons/parallel/CSI 48.55
39 TestAddons/parallel/Headlamp 18.65
40 TestAddons/parallel/CloudSpanner 6.51
41 TestAddons/parallel/LocalPath 52.36
42 TestAddons/parallel/NvidiaDevicePlugin 5.43
43 TestAddons/parallel/Yakd 11.67
44 TestAddons/StoppedEnableDisable 6.04
45 TestCertOptions 36.91
46 TestCertExpiration 249.97
47 TestDockerFlags 48.35
48 TestForceSystemdFlag 46.56
49 TestForceSystemdEnv 42.97
55 TestErrorSpam/setup 32.22
56 TestErrorSpam/start 0.72
57 TestErrorSpam/status 0.96
58 TestErrorSpam/pause 1.33
59 TestErrorSpam/unpause 1.44
60 TestErrorSpam/stop 2.05
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 40.29
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 30.2
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
72 TestFunctional/serial/CacheCmd/cache/add_local 0.93
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.3
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
80 TestFunctional/serial/ExtraConfig 43.47
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.17
83 TestFunctional/serial/LogsFileCmd 1.2
84 TestFunctional/serial/InvalidService 4.51
86 TestFunctional/parallel/ConfigCmd 0.47
87 TestFunctional/parallel/DashboardCmd 10.74
88 TestFunctional/parallel/DryRun 0.55
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 0.95
94 TestFunctional/parallel/ServiceCmdConnect 12.65
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 26.84
98 TestFunctional/parallel/SSHCmd 0.75
99 TestFunctional/parallel/CpCmd 1.99
101 TestFunctional/parallel/FileSync 0.34
102 TestFunctional/parallel/CertSync 2.01
106 TestFunctional/parallel/NodeLabels 0.1
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
110 TestFunctional/parallel/License 0.26
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
126 TestFunctional/parallel/MountCmd/any-port 8.13
127 TestFunctional/parallel/ServiceCmd/List 0.61
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
130 TestFunctional/parallel/ServiceCmd/Format 0.46
131 TestFunctional/parallel/ServiceCmd/URL 0.39
132 TestFunctional/parallel/MountCmd/specific-port 2.13
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.43
134 TestFunctional/parallel/Version/short 0.08
135 TestFunctional/parallel/Version/components 1.09
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.26
141 TestFunctional/parallel/ImageCommands/Setup 0.87
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
146 TestFunctional/parallel/DockerEnv/bash 1.31
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 125.07
160 TestMultiControlPlane/serial/DeployApp 8.9
161 TestMultiControlPlane/serial/PingHostFromPods 1.76
162 TestMultiControlPlane/serial/AddWorkerNode 27.01
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
165 TestMultiControlPlane/serial/CopyFile 18.99
166 TestMultiControlPlane/serial/StopSecondaryNode 11.86
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
168 TestMultiControlPlane/serial/RestartSecondaryNode 54.77
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.11
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 190.66
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.38
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
173 TestMultiControlPlane/serial/StopCluster 23.61
174 TestMultiControlPlane/serial/RestartCluster 99.44
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.86
176 TestMultiControlPlane/serial/AddSecondaryNode 47.02
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
180 TestImageBuild/serial/Setup 30.47
181 TestImageBuild/serial/NormalBuild 1.87
182 TestImageBuild/serial/BuildWithBuildArg 1.01
183 TestImageBuild/serial/BuildWithDockerIgnore 0.92
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.95
188 TestJSONOutput/start/Command 72.65
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.61
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.54
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.99
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.22
213 TestKicCustomNetwork/create_custom_network 32.82
214 TestKicCustomNetwork/use_default_bridge_network 31.85
215 TestKicExistingNetwork 31.7
216 TestKicCustomSubnet 33.14
217 TestKicStaticIP 34.55
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 68.84
222 TestMountStart/serial/StartWithMountFirst 7.58
223 TestMountStart/serial/VerifyMountFirst 0.26
224 TestMountStart/serial/StartWithMountSecond 7.65
225 TestMountStart/serial/VerifyMountSecond 0.29
226 TestMountStart/serial/DeleteFirst 1.48
227 TestMountStart/serial/VerifyMountPostDelete 0.27
228 TestMountStart/serial/Stop 1.21
229 TestMountStart/serial/RestartStopped 8.58
230 TestMountStart/serial/VerifyMountPostStop 0.27
233 TestMultiNode/serial/FreshStart2Nodes 86.8
234 TestMultiNode/serial/DeployApp2Nodes 57.11
235 TestMultiNode/serial/PingHostFrom2Pods 1.04
236 TestMultiNode/serial/AddNode 17.23
237 TestMultiNode/serial/MultiNodeLabels 0.11
238 TestMultiNode/serial/ProfileList 0.68
239 TestMultiNode/serial/CopyFile 9.74
240 TestMultiNode/serial/StopNode 2.2
241 TestMultiNode/serial/StartAfterStop 10.91
242 TestMultiNode/serial/RestartKeepsNodes 93.8
243 TestMultiNode/serial/DeleteNode 5.66
244 TestMultiNode/serial/StopMultiNode 21.74
245 TestMultiNode/serial/RestartMultiNode 55.83
246 TestMultiNode/serial/ValidateNameConflict 34
251 TestPreload 143
253 TestScheduledStopUnix 103.98
254 TestSkaffold 122.67
256 TestInsufficientStorage 13.07
257 TestRunningBinaryUpgrade 90.31
259 TestKubernetesUpgrade 395.04
260 TestMissingContainerUpgrade 155
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 42.01
264 TestNoKubernetes/serial/StartWithStopK8s 9.69
265 TestNoKubernetes/serial/Start 7.59
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
267 TestNoKubernetes/serial/ProfileList 1.11
268 TestNoKubernetes/serial/Stop 1.22
269 TestNoKubernetes/serial/StartNoArgs 8.5
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
282 TestStoppedBinaryUpgrade/Setup 0.89
283 TestStoppedBinaryUpgrade/Upgrade 91.68
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
293 TestPause/serial/Start 70.14
294 TestPause/serial/SecondStartNoReconfiguration 29.86
295 TestPause/serial/Pause 0.67
296 TestPause/serial/VerifyStatus 0.32
297 TestPause/serial/Unpause 0.52
298 TestPause/serial/PauseAgain 0.66
299 TestPause/serial/DeletePaused 2.15
300 TestPause/serial/VerifyDeletedResources 13.21
301 TestNetworkPlugins/group/auto/Start 53.09
302 TestNetworkPlugins/group/auto/KubeletFlags 0.32
303 TestNetworkPlugins/group/auto/NetCatPod 11.41
304 TestNetworkPlugins/group/auto/DNS 0.34
305 TestNetworkPlugins/group/auto/Localhost 0.25
306 TestNetworkPlugins/group/auto/HairPin 0.33
307 TestNetworkPlugins/group/kindnet/Start 77.86
308 TestNetworkPlugins/group/calico/Start 76.95
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
311 TestNetworkPlugins/group/kindnet/NetCatPod 11.4
312 TestNetworkPlugins/group/kindnet/DNS 0.24
313 TestNetworkPlugins/group/kindnet/Localhost 0.19
314 TestNetworkPlugins/group/kindnet/HairPin 0.18
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.33
317 TestNetworkPlugins/group/calico/NetCatPod 12.41
318 TestNetworkPlugins/group/custom-flannel/Start 60.72
319 TestNetworkPlugins/group/calico/DNS 0.26
320 TestNetworkPlugins/group/calico/Localhost 0.2
321 TestNetworkPlugins/group/calico/HairPin 0.23
322 TestNetworkPlugins/group/false/Start 86.17
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.43
325 TestNetworkPlugins/group/custom-flannel/DNS 0.19
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
328 TestNetworkPlugins/group/enable-default-cni/Start 80.64
329 TestNetworkPlugins/group/false/KubeletFlags 0.36
330 TestNetworkPlugins/group/false/NetCatPod 12.35
331 TestNetworkPlugins/group/false/DNS 0.21
332 TestNetworkPlugins/group/false/Localhost 0.24
333 TestNetworkPlugins/group/false/HairPin 0.22
334 TestNetworkPlugins/group/flannel/Start 51.52
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.36
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
342 TestNetworkPlugins/group/flannel/NetCatPod 10.36
343 TestNetworkPlugins/group/bridge/Start 53.14
344 TestNetworkPlugins/group/flannel/DNS 0.29
345 TestNetworkPlugins/group/flannel/Localhost 0.29
346 TestNetworkPlugins/group/flannel/HairPin 0.24
347 TestNetworkPlugins/group/kubenet/Start 83.78
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
349 TestNetworkPlugins/group/bridge/NetCatPod 11.32
350 TestNetworkPlugins/group/bridge/DNS 0.28
351 TestNetworkPlugins/group/bridge/Localhost 0.31
352 TestNetworkPlugins/group/bridge/HairPin 0.28
354 TestStartStop/group/old-k8s-version/serial/FirstStart 149.21
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
356 TestNetworkPlugins/group/kubenet/NetCatPod 11.38
357 TestNetworkPlugins/group/kubenet/DNS 0.3
358 TestNetworkPlugins/group/kubenet/Localhost 0.19
359 TestNetworkPlugins/group/kubenet/HairPin 0.18
361 TestStartStop/group/no-preload/serial/FirstStart 54.14
362 TestStartStop/group/no-preload/serial/DeployApp 10.36
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
364 TestStartStop/group/no-preload/serial/Stop 11.03
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
366 TestStartStop/group/no-preload/serial/SecondStart 267.26
367 TestStartStop/group/old-k8s-version/serial/DeployApp 10.81
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.45
369 TestStartStop/group/old-k8s-version/serial/Stop 11.1
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/old-k8s-version/serial/SecondStart 296.11
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
375 TestStartStop/group/no-preload/serial/Pause 2.84
377 TestStartStop/group/embed-certs/serial/FirstStart 79.55
378 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
380 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
381 TestStartStop/group/old-k8s-version/serial/Pause 2.73
383 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.8
384 TestStartStop/group/embed-certs/serial/DeployApp 9.38
385 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
386 TestStartStop/group/embed-certs/serial/Stop 11.09
387 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
388 TestStartStop/group/embed-certs/serial/SecondStart 266.56
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.08
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.98
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
397 TestStartStop/group/embed-certs/serial/Pause 2.85
399 TestStartStop/group/newest-cni/serial/FirstStart 40.64
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.36
402 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
403 TestStartStop/group/newest-cni/serial/Stop 8.56
404 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
405 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
406 TestStartStop/group/newest-cni/serial/SecondStart 19.25
407 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
408 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.27
409 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
412 TestStartStop/group/newest-cni/serial/Pause 2.7
TestDownloadOnly/v1.20.0/json-events (8.43s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-383349 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-383349 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.427376091s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.43s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0924 18:20:01.231487    7514 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0924 18:20:01.231574    7514 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-383349
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-383349: exit status 85 (74.869036ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-383349 | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |          |
	|         | -p download-only-383349        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:19:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:19:52.840545    7519 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:19:52.840684    7519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:52.840696    7519 out.go:358] Setting ErrFile to fd 2...
	I0924 18:19:52.840701    7519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:52.840934    7519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	W0924 18:19:52.841068    7519 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19700-2203/.minikube/config/config.json: open /home/jenkins/minikube-integration/19700-2203/.minikube/config/config.json: no such file or directory
	I0924 18:19:52.841551    7519 out.go:352] Setting JSON to true
	I0924 18:19:52.842312    7519 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":138,"bootTime":1727201855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0924 18:19:52.842384    7519 start.go:139] virtualization:  
	I0924 18:19:52.846161    7519 out.go:97] [download-only-383349] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0924 18:19:52.846349    7519 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:19:52.846404    7519 notify.go:220] Checking for updates...
	I0924 18:19:52.849008    7519 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:19:52.851701    7519 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:19:52.854271    7519 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	I0924 18:19:52.857128    7519 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	I0924 18:19:52.859846    7519 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0924 18:19:52.866457    7519 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 18:19:52.866685    7519 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:19:52.894837    7519 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:19:52.894946    7519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:19:53.284563    7519 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:19:53.274499027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:19:53.284681    7519 docker.go:318] overlay module found
	I0924 18:19:53.287756    7519 out.go:97] Using the docker driver based on user configuration
	I0924 18:19:53.287788    7519 start.go:297] selected driver: docker
	I0924 18:19:53.287796    7519 start.go:901] validating driver "docker" against <nil>
	I0924 18:19:53.287890    7519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:19:53.343826    7519 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:19:53.334814021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:19:53.344030    7519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:19:53.344315    7519 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0924 18:19:53.344495    7519 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 18:19:53.347288    7519 out.go:169] Using Docker driver with root privileges
	I0924 18:19:53.350018    7519 cni.go:84] Creating CNI manager for ""
	I0924 18:19:53.350105    7519 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0924 18:19:53.350196    7519 start.go:340] cluster config:
	{Name:download-only-383349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-383349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:19:53.352991    7519 out.go:97] Starting "download-only-383349" primary control-plane node in "download-only-383349" cluster
	I0924 18:19:53.353023    7519 cache.go:121] Beginning downloading kic base image for docker with docker
	I0924 18:19:53.355727    7519 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0924 18:19:53.355767    7519 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 18:19:53.355928    7519 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 18:19:53.371396    7519 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:19:53.371589    7519 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 18:19:53.371695    7519 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:19:53.420639    7519 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 18:19:53.420662    7519 cache.go:56] Caching tarball of preloaded images
	I0924 18:19:53.420826    7519 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 18:19:53.423935    7519 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0924 18:19:53.423985    7519 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0924 18:19:53.514253    7519 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-383349 host does not exist
	  To start a cluster, run: "minikube start -p download-only-383349"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-383349
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.31.1/json-events (5.58s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-168686 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-168686 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.581514993s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.58s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0924 18:20:07.289005    7514 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0924 18:20:07.289041    7514 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-168686
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-168686: exit status 85 (63.222816ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-383349 | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p download-only-383349        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-383349        | download-only-383349 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | -o=json --download-only        | download-only-168686 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | -p download-only-168686        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:20:01
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:20:01.751460    7715 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:20:01.751602    7715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:01.751639    7715 out.go:358] Setting ErrFile to fd 2...
	I0924 18:20:01.751652    7715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:01.751928    7715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 18:20:01.752367    7715 out.go:352] Setting JSON to true
	I0924 18:20:01.753237    7715 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":147,"bootTime":1727201855,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0924 18:20:01.753334    7715 start.go:139] virtualization:  
	I0924 18:20:01.755771    7715 out.go:97] [download-only-168686] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 18:20:01.755994    7715 notify.go:220] Checking for updates...
	I0924 18:20:01.758127    7715 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:20:01.760411    7715 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:20:01.762910    7715 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	I0924 18:20:01.764546    7715 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	I0924 18:20:01.765996    7715 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0924 18:20:01.769832    7715 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 18:20:01.770315    7715 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:20:01.801337    7715 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:20:01.801455    7715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:20:01.867561    7715 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-24 18:20:01.857345989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:20:01.867690    7715 docker.go:318] overlay module found
	I0924 18:20:01.869442    7715 out.go:97] Using the docker driver based on user configuration
	I0924 18:20:01.869479    7715 start.go:297] selected driver: docker
	I0924 18:20:01.869488    7715 start.go:901] validating driver "docker" against <nil>
	I0924 18:20:01.869619    7715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:20:01.932285    7715 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-24 18:20:01.919764038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:20:01.932519    7715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:20:01.932879    7715 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0924 18:20:01.933072    7715 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 18:20:01.935098    7715 out.go:169] Using Docker driver with root privileges
	I0924 18:20:01.936750    7715 cni.go:84] Creating CNI manager for ""
	I0924 18:20:01.936834    7715 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 18:20:01.936855    7715 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:20:01.936953    7715 start.go:340] cluster config:
	{Name:download-only-168686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-168686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:01.939050    7715 out.go:97] Starting "download-only-168686" primary control-plane node in "download-only-168686" cluster
	I0924 18:20:01.939086    7715 cache.go:121] Beginning downloading kic base image for docker with docker
	I0924 18:20:01.940343    7715 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0924 18:20:01.940389    7715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 18:20:01.940463    7715 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 18:20:01.960554    7715 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 18:20:01.960694    7715 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 18:20:01.960721    7715 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0924 18:20:01.960730    7715 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0924 18:20:01.960750    7715 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0924 18:20:01.989610    7715 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 18:20:01.989643    7715 cache.go:56] Caching tarball of preloaded images
	I0924 18:20:01.989818    7715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 18:20:01.991695    7715 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0924 18:20:01.991728    7715 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0924 18:20:02.094682    7715 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 18:20:05.866253    7715 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0924 18:20:05.866356    7715 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19700-2203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0924 18:20:06.613754    7715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 18:20:06.614137    7715 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/download-only-168686/config.json ...
	I0924 18:20:06.614171    7715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/download-only-168686/config.json: {Name:mka621e74381da7c863e52d08dc103f0ba17f7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:06.614357    7715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 18:20:06.614521    7715 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19700-2203/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-168686 host does not exist
	  To start a cluster, run: "minikube start -p download-only-168686"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-168686
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
I0924 18:20:08.479549    7514 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-168589 --alsologtostderr --binary-mirror http://127.0.0.1:37169 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-168589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-168589
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (86.4s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-209300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-209300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m24.09233618s)
helpers_test.go:175: Cleaning up "offline-docker-209300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-209300
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-209300: (2.311685882s)
--- PASS: TestOffline (86.40s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-706965
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-706965: exit status 85 (59.671207ms)

                                                
                                                
-- stdout --
	* Profile "addons-706965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-706965"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-706965
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-706965: exit status 85 (74.317039ms)

                                                
                                                
-- stdout --
	* Profile "addons-706965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-706965"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (221.71s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-706965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-706965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m41.708179499s)
--- PASS: TestAddons/Setup (221.71s)

                                                
                                    
TestAddons/serial/Volcano (41.76s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 60.607947ms
addons_test.go:843: volcano-admission stabilized in 60.750592ms
addons_test.go:835: volcano-scheduler stabilized in 61.416079ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-wld4m" [d4cfceaf-5ea8-49ef-89ea-4e3e6affcec0] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003437242s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-8zmxc" [4b4fe926-a273-474b-92ed-ba579e093dd6] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003572478s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-b66fr" [500bd456-2b22-47dd-97b6-c940f312c4dd] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003986455s
addons_test.go:870: (dbg) Run:  kubectl --context addons-706965 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-706965 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-706965 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [bd20418c-869b-4a85-970c-d16be8f76b7c] Pending
helpers_test.go:344: "test-job-nginx-0" [bd20418c-869b-4a85-970c-d16be8f76b7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [bd20418c-869b-4a85-970c-d16be8f76b7c] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.016055713s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 addons disable volcano --alsologtostderr -v=1: (11.082278147s)
--- PASS: TestAddons/serial/Volcano (41.76s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-706965 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-706965 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/parallel/Ingress (19.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-706965 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-706965 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-706965 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [11d31473-5bd6-4168-a77b-a5f2f5c365b9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [11d31473-5bd6-4168-a77b-a5f2f5c365b9] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003965072s
I0924 18:34:00.559887    7514 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-706965 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 addons disable ingress-dns --alsologtostderr -v=1: (1.509427093s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 addons disable ingress --alsologtostderr -v=1: (7.751341157s)
--- PASS: TestAddons/parallel/Ingress (19.12s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-249vh" [b8be5f71-edab-46f2-980c-a89d88c36b67] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004349701s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-706965
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-706965: (5.985591565s)
--- PASS: TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.98s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.636798ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-h5hgc" [b59a81ec-81e4-43e5-be73-e8124168ec83] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004196526s
addons_test.go:413: (dbg) Run:  kubectl --context addons-706965 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable metrics-server --alsologtostderr -v=1
2024/09/24 18:33:48 [DEBUG] GET http://192.168.49.2:5000
--- PASS: TestAddons/parallel/MetricsServer (5.98s)

                                                
                                    
TestAddons/parallel/CSI (48.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0924 18:32:54.880583    7514 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0924 18:32:54.886412    7514 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0924 18:32:54.886656    7514 kapi.go:107] duration metric: took 9.589262ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 9.80134ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-706965 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-706965 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [03048457-e78d-4e9f-b294-44262c6f043d] Pending
helpers_test.go:344: "task-pv-pod" [03048457-e78d-4e9f-b294-44262c6f043d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [03048457-e78d-4e9f-b294-44262c6f043d] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003577847s
addons_test.go:528: (dbg) Run:  kubectl --context addons-706965 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-706965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-706965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-706965 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-706965 delete pod task-pv-pod: (1.435729072s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-706965 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-706965 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-706965 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f4c1f6d5-71fe-4013-a7de-c24a4af03f93] Pending
helpers_test.go:344: "task-pv-pod-restore" [f4c1f6d5-71fe-4013-a7de-c24a4af03f93] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f4c1f6d5-71fe-4013-a7de-c24a4af03f93] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004312484s
addons_test.go:570: (dbg) Run:  kubectl --context addons-706965 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-706965 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-706965 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.739722846s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.55s)
TestAddons/parallel/Headlamp (18.65s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-706965 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-9pj4m" [4e8fdb2a-927f-4fae-9fe1-15d5e1be8fa5] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-9pj4m" [4e8fdb2a-927f-4fae-9fe1-15d5e1be8fa5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-9pj4m" [4e8fdb2a-927f-4fae-9fe1-15d5e1be8fa5] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.00442058s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 addons disable headlamp --alsologtostderr -v=1: (5.692827512s)
--- PASS: TestAddons/parallel/Headlamp (18.65s)
TestAddons/parallel/CloudSpanner (6.51s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-svwq9" [b9fe9b99-a39d-4d1f-91f7-3c6140e8ca49] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004081902s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-706965
--- PASS: TestAddons/parallel/CloudSpanner (6.51s)
TestAddons/parallel/LocalPath (52.36s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-706965 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-706965 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-706965 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ea9ba7ba-400f-4009-b573-babc293d1410] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ea9ba7ba-400f-4009-b573-babc293d1410] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ea9ba7ba-400f-4009-b573-babc293d1410] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00337186s
addons_test.go:938: (dbg) Run:  kubectl --context addons-706965 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 ssh "cat /opt/local-path-provisioner/pvc-233681aa-275e-488d-b561-47af3e07d774_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-706965 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-706965 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.062867149s)
--- PASS: TestAddons/parallel/LocalPath (52.36s)
TestAddons/parallel/NvidiaDevicePlugin (5.43s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h4jpw" [ca8e2bf5-a3a8-45ff-982d-6671ac0cdd2e] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003640178s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-706965
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.43s)
TestAddons/parallel/Yakd (11.67s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-vhzds" [84d3ef93-7102-47e0-b398-8ea3926d3929] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004035527s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-706965 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-706965 addons disable yakd --alsologtostderr -v=1: (5.668960043s)
--- PASS: TestAddons/parallel/Yakd (11.67s)
TestAddons/StoppedEnableDisable (6.04s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-706965
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-706965: (5.776117227s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-706965
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-706965
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-706965
--- PASS: TestAddons/StoppedEnableDisable (6.04s)
TestCertOptions (36.91s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-278824 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-278824 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.190707962s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-278824 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-278824 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-278824 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-278824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-278824
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-278824: (2.055285842s)
--- PASS: TestCertOptions (36.91s)
TestCertExpiration (249.97s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-289632 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-289632 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (36.665595373s)
E0924 19:11:53.917638    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-289632 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0924 19:14:46.069276    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-289632 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (31.009209772s)
helpers_test.go:175: Cleaning up "cert-expiration-289632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-289632
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-289632: (2.291027173s)
--- PASS: TestCertExpiration (249.97s)
TestDockerFlags (48.35s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-735396 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-735396 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.279466383s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-735396 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-735396 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-735396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-735396
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-735396: (2.335767114s)
--- PASS: TestDockerFlags (48.35s)
TestForceSystemdFlag (46.56s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-449213 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-449213 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.134120768s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-449213 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-449213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-449213
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-449213: (1.986327163s)
--- PASS: TestForceSystemdFlag (46.56s)
TestForceSystemdEnv (42.97s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-209350 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-209350 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.504760251s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-209350 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-209350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-209350
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-209350: (2.094463646s)
--- PASS: TestForceSystemdEnv (42.97s)
TestErrorSpam/setup (32.22s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-062297 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-062297 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-062297 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-062297 --driver=docker  --container-runtime=docker: (32.220221646s)
--- PASS: TestErrorSpam/setup (32.22s)
TestErrorSpam/start (0.72s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)
TestErrorSpam/status (0.96s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 status
--- PASS: TestErrorSpam/status (0.96s)
TestErrorSpam/pause (1.33s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 pause
--- PASS: TestErrorSpam/pause (1.33s)
TestErrorSpam/unpause (1.44s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 unpause
--- PASS: TestErrorSpam/unpause (1.44s)
TestErrorSpam/stop (2.05s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 stop: (1.8547551s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-062297 --log_dir /tmp/nospam-062297 stop
--- PASS: TestErrorSpam/stop (2.05s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19700-2203/.minikube/files/etc/test/nested/copy/7514/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (40.29s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-105990 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-105990 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.284401133s)
--- PASS: TestFunctional/serial/StartWithProxy (40.29s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (30.2s)
=== RUN   TestFunctional/serial/SoftStart
I0924 18:36:23.406596    7514 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-105990 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-105990 --alsologtostderr -v=8: (30.193173732s)
functional_test.go:663: soft start took 30.195855305s for "functional-105990" cluster.
I0924 18:36:53.600085    7514 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (30.20s)
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)
TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-105990 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-105990 cache add registry.k8s.io/pause:3.1: (1.198048635s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-105990 cache add registry.k8s.io/pause:3.3: (1.096656205s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)
TestFunctional/serial/CacheCmd/cache/add_local (0.93s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-105990 /tmp/TestFunctionalserialCacheCmdcacheadd_local2517501006/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cache add minikube-local-cache-test:functional-105990
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cache delete minikube-local-cache-test:functional-105990
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-105990
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.834531ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 kubectl -- --context functional-105990 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.30s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-105990 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.47s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-105990 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-105990 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.469083022s)
functional_test.go:761: restart took 43.469186974s for "functional-105990" cluster.
I0924 18:37:43.987972    7514 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (43.47s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-105990 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-105990 logs: (1.165685071s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 logs --file /tmp/TestFunctionalserialLogsFileCmd406752842/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-105990 logs --file /tmp/TestFunctionalserialLogsFileCmd406752842/001/logs.txt: (1.204201157s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

                                                
                                    
TestFunctional/serial/InvalidService (4.51s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-105990 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-105990
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-105990: exit status 115 (542.723817ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32008 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-105990 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.51s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 config get cpus: exit status 14 (93.765749ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 config get cpus: exit status 14 (77.800048ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-105990 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-105990 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48427: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.74s)

                                                
                                    
TestFunctional/parallel/DryRun (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-105990 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-105990 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (231.788983ms)

                                                
                                                
-- stdout --
	* [functional-105990] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:38:24.327893   48112 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:38:24.328272   48112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:38:24.328307   48112 out.go:358] Setting ErrFile to fd 2...
	I0924 18:38:24.328476   48112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:38:24.328771   48112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 18:38:24.329555   48112 out.go:352] Setting JSON to false
	I0924 18:38:24.331009   48112 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1250,"bootTime":1727201855,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0924 18:38:24.331499   48112 start.go:139] virtualization:  
	I0924 18:38:24.336938   48112 out.go:177] * [functional-105990] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 18:38:24.344482   48112 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:38:24.344545   48112 notify.go:220] Checking for updates...
	I0924 18:38:24.353011   48112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:38:24.358391   48112 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	I0924 18:38:24.363537   48112 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	I0924 18:38:24.367584   48112 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 18:38:24.371886   48112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:38:24.373910   48112 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:38:24.374590   48112 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:38:24.417062   48112 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:38:24.417235   48112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:38:24.490465   48112 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:38:24.478709043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:38:24.490849   48112 docker.go:318] overlay module found
	I0924 18:38:24.493789   48112 out.go:177] * Using the docker driver based on existing profile
	I0924 18:38:24.495604   48112 start.go:297] selected driver: docker
	I0924 18:38:24.495624   48112 start.go:901] validating driver "docker" against &{Name:functional-105990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-105990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:38:24.495738   48112 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:38:24.497782   48112 out.go:201] 
	W0924 18:38:24.499513   48112 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0924 18:38:24.501015   48112 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-105990 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.55s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-105990 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-105990 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (182.715796ms)

                                                
                                                
-- stdout --
	* [functional-105990] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:38:24.142670   48068 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:38:24.142794   48068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:38:24.142820   48068 out.go:358] Setting ErrFile to fd 2...
	I0924 18:38:24.142826   48068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:38:24.143765   48068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 18:38:24.144145   48068 out.go:352] Setting JSON to false
	I0924 18:38:24.145092   48068 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1250,"bootTime":1727201855,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0924 18:38:24.145233   48068 start.go:139] virtualization:  
	I0924 18:38:24.148645   48068 out.go:177] * [functional-105990] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0924 18:38:24.151564   48068 notify.go:220] Checking for updates...
	I0924 18:38:24.151564   48068 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:38:24.154481   48068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:38:24.157189   48068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	I0924 18:38:24.159779   48068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	I0924 18:38:24.162445   48068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 18:38:24.165061   48068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:38:24.168408   48068 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:38:24.168982   48068 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:38:24.194347   48068 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 18:38:24.194469   48068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:38:24.253875   48068 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 18:38:24.243731153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:38:24.253983   48068 docker.go:318] overlay module found
	I0924 18:38:24.256912   48068 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0924 18:38:24.259602   48068 start.go:297] selected driver: docker
	I0924 18:38:24.259622   48068 start.go:901] validating driver "docker" against &{Name:functional-105990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-105990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:38:24.259726   48068 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:38:24.263053   48068 out.go:201] 
	W0924 18:38:24.265624   48068 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0924 18:38:24.268319   48068 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

TestFunctional/parallel/ServiceCmdConnect (12.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-105990 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-105990 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-k676j" [b71bfec0-85a1-4017-94b7-e6835eeeef13] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-k676j" [b71bfec0-85a1-4017-94b7-e6835eeeef13] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003479932s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30221
functional_test.go:1675: http://192.168.49.2:30221: success! body:

Hostname: hello-node-connect-65d86f57f4-k676j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30221
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.65s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [44bbb212-880d-412c-8374-0fea16051278] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003393438s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-105990 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-105990 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-105990 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-105990 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00afa5e0-dd44-4a0e-8ea3-05e52b6964c1] Pending
helpers_test.go:344: "sp-pod" [00afa5e0-dd44-4a0e-8ea3-05e52b6964c1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00afa5e0-dd44-4a0e-8ea3-05e52b6964c1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004091269s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-105990 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-105990 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-105990 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a55ae797-3076-45d2-9313-699b9d3497ac] Pending
helpers_test.go:344: "sp-pod" [a55ae797-3076-45d2-9313-699b9d3497ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a55ae797-3076-45d2-9313-699b9d3497ac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00348103s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-105990 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.84s)

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (1.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh -n functional-105990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cp functional-105990:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1836505705/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh -n functional-105990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh -n functional-105990 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7514/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo cat /etc/test/nested/copy/7514/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.01s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7514.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo cat /etc/ssl/certs/7514.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7514.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo cat /usr/share/ca-certificates/7514.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo cat /etc/ssl/certs/75142.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo cat /usr/share/ca-certificates/75142.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.01s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-105990 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 ssh "sudo systemctl is-active crio": exit status 1 (288.119179ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-105990 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-105990 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-105990 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-105990 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45381: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-105990 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-105990 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1b6fce2c-e00d-4399-a750-7e47071ffcac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1b6fce2c-e00d-4399-a750-7e47071ffcac] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00365s
I0924 18:38:01.395302    7514 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-105990 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.201.110 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-105990 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-105990 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-105990 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-4xr6g" [a6cac5a0-7ee7-48c4-89d1-c5a99c618762] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-4xr6g" [a6cac5a0-7ee7-48c4-89d1-c5a99c618762] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004449398s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "343.606784ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "61.147537ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "343.5467ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "51.454127ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (8.13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdany-port3912275492/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727203099809494639" to /tmp/TestFunctionalparallelMountCmdany-port3912275492/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727203099809494639" to /tmp/TestFunctionalparallelMountCmdany-port3912275492/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727203099809494639" to /tmp/TestFunctionalparallelMountCmdany-port3912275492/001/test-1727203099809494639
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (309.766933ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0924 18:38:20.120365    7514 retry.go:31] will retry after 505.517155ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 24 18:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 24 18:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 24 18:38 test-1727203099809494639
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh cat /mount-9p/test-1727203099809494639
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-105990 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ef9ef065-69c8-47df-a529-8998d1a54d86] Pending
helpers_test.go:344: "busybox-mount" [ef9ef065-69c8-47df-a529-8998d1a54d86] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ef9ef065-69c8-47df-a529-8998d1a54d86] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ef9ef065-69c8-47df-a529-8998d1a54d86] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003882717s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-105990 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdany-port3912275492/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.13s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 service list -o json
functional_test.go:1494: Took "564.596353ms" to run "out/minikube-linux-arm64 -p functional-105990 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32450
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32450
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdspecific-port2103170570/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (396.068757ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0924 18:38:28.337859    7514 retry.go:31] will retry after 602.643276ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdspecific-port2103170570/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 ssh "sudo umount -f /mount-9p": exit status 1 (315.100687ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-105990 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdspecific-port2103170570/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000801964/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000801964/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000801964/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T" /mount1: exit status 1 (833.136852ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0924 18:38:30.911534    7514 retry.go:31] will retry after 521.49789ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-105990 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000801964/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000801964/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-105990 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000801964/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.09s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-105990 version -o=json --components: (1.085774473s)
--- PASS: TestFunctional/parallel/Version/components (1.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-105990 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-105990
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-105990
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-105990 image ls --format short --alsologtostderr:
I0924 18:38:41.373504   51621 out.go:345] Setting OutFile to fd 1 ...
I0924 18:38:41.373659   51621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.373665   51621 out.go:358] Setting ErrFile to fd 2...
I0924 18:38:41.373669   51621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.373937   51621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
I0924 18:38:41.374565   51621 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.374689   51621 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.375226   51621 cli_runner.go:164] Run: docker container inspect functional-105990 --format={{.State.Status}}
I0924 18:38:41.414072   51621 ssh_runner.go:195] Run: systemctl --version
I0924 18:38:41.414133   51621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-105990
I0924 18:38:41.436698   51621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/functional-105990/id_rsa Username:docker}
I0924 18:38:41.529762   51621 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-105990 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-105990 | ce2d2cda2d858 | 4.78MB |
| docker.io/library/minikube-local-cache-test | functional-105990 | 1e270dfc94dcf | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-105990 image ls --format table --alsologtostderr:
I0924 18:38:41.906716   51782 out.go:345] Setting OutFile to fd 1 ...
I0924 18:38:41.906911   51782 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.906931   51782 out.go:358] Setting ErrFile to fd 2...
I0924 18:38:41.906949   51782 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.907204   51782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
I0924 18:38:41.907841   51782 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.908007   51782 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.911064   51782 cli_runner.go:164] Run: docker container inspect functional-105990 --format={{.State.Status}}
I0924 18:38:41.930874   51782 ssh_runner.go:195] Run: systemctl --version
I0924 18:38:41.930926   51782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-105990
I0924 18:38:41.954719   51782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/functional-105990/id_rsa Username:docker}
I0924 18:38:42.050092   51782 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-105990 image ls --format json --alsologtostderr:
[{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed1
4e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-105990"],"size":"4780000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns
/coredns:v1.11.3"],"size":"60200000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"1e270dfc94dcf3aba0b36d1f57b5164c9f969d1cb812308b4935491074e7f8e7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-105990"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"siz
e":"244000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-105990 image ls --format json --alsologtostderr:
I0924 18:38:41.669626   51697 out.go:345] Setting OutFile to fd 1 ...
I0924 18:38:41.669850   51697 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.669860   51697 out.go:358] Setting ErrFile to fd 2...
I0924 18:38:41.669866   51697 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.670155   51697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
I0924 18:38:41.670813   51697 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.671467   51697 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.672041   51697 cli_runner.go:164] Run: docker container inspect functional-105990 --format={{.State.Status}}
I0924 18:38:41.692181   51697 ssh_runner.go:195] Run: systemctl --version
I0924 18:38:41.692235   51697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-105990
I0924 18:38:41.717434   51697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/functional-105990/id_rsa Username:docker}
I0924 18:38:41.809484   51697 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-105990 image ls --format yaml --alsologtostderr:
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 1e270dfc94dcf3aba0b36d1f57b5164c9f969d1cb812308b4935491074e7f8e7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-105990
size: "30"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-105990
size: "4780000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-105990 image ls --format yaml --alsologtostderr:
I0924 18:38:41.415574   51629 out.go:345] Setting OutFile to fd 1 ...
I0924 18:38:41.415692   51629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.415707   51629 out.go:358] Setting ErrFile to fd 2...
I0924 18:38:41.415712   51629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.415975   51629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
I0924 18:38:41.416650   51629 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.416761   51629 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.417279   51629 cli_runner.go:164] Run: docker container inspect functional-105990 --format={{.State.Status}}
I0924 18:38:41.435006   51629 ssh_runner.go:195] Run: systemctl --version
I0924 18:38:41.435067   51629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-105990
I0924 18:38:41.460515   51629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/functional-105990/id_rsa Username:docker}
I0924 18:38:41.564365   51629 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-105990 ssh pgrep buildkitd: exit status 1 (326.439996ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image build -t localhost/my-image:functional-105990 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-105990 image build -t localhost/my-image:functional-105990 testdata/build --alsologtostderr: (2.727707245s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-105990 image build -t localhost/my-image:functional-105990 testdata/build --alsologtostderr:
I0924 18:38:41.953782   51789 out.go:345] Setting OutFile to fd 1 ...
I0924 18:38:41.954019   51789 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.954032   51789 out.go:358] Setting ErrFile to fd 2...
I0924 18:38:41.954039   51789 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:38:41.954357   51789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
I0924 18:38:41.955423   51789 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.956012   51789 config.go:182] Loaded profile config "functional-105990": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 18:38:41.956522   51789 cli_runner.go:164] Run: docker container inspect functional-105990 --format={{.State.Status}}
I0924 18:38:41.981178   51789 ssh_runner.go:195] Run: systemctl --version
I0924 18:38:41.981231   51789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-105990
I0924 18:38:42.004350   51789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/functional-105990/id_rsa Username:docker}
I0924 18:38:42.098494   51789 build_images.go:161] Building image from path: /tmp/build.2565438801.tar
I0924 18:38:42.098568   51789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0924 18:38:42.111107   51789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2565438801.tar
I0924 18:38:42.115671   51789 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2565438801.tar: stat -c "%s %y" /var/lib/minikube/build/build.2565438801.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2565438801.tar': No such file or directory
I0924 18:38:42.115709   51789 ssh_runner.go:362] scp /tmp/build.2565438801.tar --> /var/lib/minikube/build/build.2565438801.tar (3072 bytes)
I0924 18:38:42.143629   51789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2565438801
I0924 18:38:42.155021   51789 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2565438801 -xf /var/lib/minikube/build/build.2565438801.tar
I0924 18:38:42.166731   51789 docker.go:360] Building image: /var/lib/minikube/build/build.2565438801
I0924 18:38:42.166821   51789 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-105990 /var/lib/minikube/build/build.2565438801
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e08d6118a99fe9c0bcefcfe30b57d4d1d69e97435c2834b6b757aecb7b580f07 done
#8 naming to localhost/my-image:functional-105990 done
#8 DONE 0.1s
I0924 18:38:44.594211   51789 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-105990 /var/lib/minikube/build/build.2565438801: (2.427364833s)
I0924 18:38:44.594305   51789 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2565438801
I0924 18:38:44.602950   51789 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2565438801.tar
I0924 18:38:44.611291   51789 build_images.go:217] Built localhost/my-image:functional-105990 from /tmp/build.2565438801.tar
I0924 18:38:44.611321   51789 build_images.go:133] succeeded building to: functional-105990
I0924 18:38:44.611327   51789 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)

TestFunctional/parallel/ImageCommands/Setup (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-105990
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image load --daemon kicbase/echo-server:functional-105990 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-105990 image load --daemon kicbase/echo-server:functional-105990 --alsologtostderr: (1.146502488s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image load --daemon kicbase/echo-server:functional-105990 --alsologtostderr
2024/09/24 18:38:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-105990
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image load --daemon kicbase/echo-server:functional-105990 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image save kicbase/echo-server:functional-105990 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/DockerEnv/bash (1.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-105990 docker-env) && out/minikube-linux-arm64 status -p functional-105990"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-105990 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image rm kicbase/echo-server:functional-105990 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-105990
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-105990 image save --daemon kicbase/echo-server:functional-105990 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-105990
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-105990
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-105990
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-105990
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (125.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-351700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0924 18:38:50.853044    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:50.859431    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:50.870832    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:50.892191    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:50.933580    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:51.015092    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:51.176667    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:51.498428    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:52.140351    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:53.421730    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:55.983983    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:39:01.106312    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:39:11.347623    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:39:31.829039    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:40:12.790824    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-351700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m4.249407199s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (125.07s)

TestMultiControlPlane/serial/DeployApp (8.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-351700 -- rollout status deployment/busybox: (5.495555163s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-46lt9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-d6x9w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-hn2ln -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-46lt9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-d6x9w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-hn2ln -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-46lt9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-d6x9w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-hn2ln -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.90s)

TestMultiControlPlane/serial/PingHostFromPods (1.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-46lt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-46lt9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-d6x9w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-d6x9w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-hn2ln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-351700 -- exec busybox-7dff88458-hn2ln -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.76s)

TestMultiControlPlane/serial/AddWorkerNode (27.01s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-351700 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-351700 -v=7 --alsologtostderr: (25.98493958s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr: (1.027003094s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.01s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-351700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.029832457s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (18.99s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp testdata/cp-test.txt ha-351700:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile673499339/001/cp-test_ha-351700.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700:/home/docker/cp-test.txt ha-351700-m02:/home/docker/cp-test_ha-351700_ha-351700-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test_ha-351700_ha-351700-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700:/home/docker/cp-test.txt ha-351700-m03:/home/docker/cp-test_ha-351700_ha-351700-m03.txt
E0924 18:41:34.712858    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test_ha-351700_ha-351700-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700:/home/docker/cp-test.txt ha-351700-m04:/home/docker/cp-test_ha-351700_ha-351700-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test_ha-351700_ha-351700-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp testdata/cp-test.txt ha-351700-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile673499339/001/cp-test_ha-351700-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m02:/home/docker/cp-test.txt ha-351700:/home/docker/cp-test_ha-351700-m02_ha-351700.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test_ha-351700-m02_ha-351700.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m02:/home/docker/cp-test.txt ha-351700-m03:/home/docker/cp-test_ha-351700-m02_ha-351700-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test_ha-351700-m02_ha-351700-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m02:/home/docker/cp-test.txt ha-351700-m04:/home/docker/cp-test_ha-351700-m02_ha-351700-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test_ha-351700-m02_ha-351700-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp testdata/cp-test.txt ha-351700-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile673499339/001/cp-test_ha-351700-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m03:/home/docker/cp-test.txt ha-351700:/home/docker/cp-test_ha-351700-m03_ha-351700.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test_ha-351700-m03_ha-351700.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m03:/home/docker/cp-test.txt ha-351700-m02:/home/docker/cp-test_ha-351700-m03_ha-351700-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test_ha-351700-m03_ha-351700-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m03:/home/docker/cp-test.txt ha-351700-m04:/home/docker/cp-test_ha-351700-m03_ha-351700-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test_ha-351700-m03_ha-351700-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp testdata/cp-test.txt ha-351700-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile673499339/001/cp-test_ha-351700-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m04:/home/docker/cp-test.txt ha-351700:/home/docker/cp-test_ha-351700-m04_ha-351700.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700 "sudo cat /home/docker/cp-test_ha-351700-m04_ha-351700.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m04:/home/docker/cp-test.txt ha-351700-m02:/home/docker/cp-test_ha-351700-m04_ha-351700-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m02 "sudo cat /home/docker/cp-test_ha-351700-m04_ha-351700-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 cp ha-351700-m04:/home/docker/cp-test.txt ha-351700-m03:/home/docker/cp-test_ha-351700-m04_ha-351700-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 ssh -n ha-351700-m03 "sudo cat /home/docker/cp-test_ha-351700-m04_ha-351700-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.99s)

TestMultiControlPlane/serial/StopSecondaryNode (11.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-351700 node stop m02 -v=7 --alsologtostderr: (11.067303289s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr: exit status 7 (794.257752ms)

-- stdout --
	ha-351700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351700-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351700-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351700-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0924 18:42:01.468477   74222 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:42:01.468781   74222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:42:01.468810   74222 out.go:358] Setting ErrFile to fd 2...
	I0924 18:42:01.468829   74222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:42:01.469519   74222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 18:42:01.469789   74222 out.go:352] Setting JSON to false
	I0924 18:42:01.469847   74222 mustload.go:65] Loading cluster: ha-351700
	I0924 18:42:01.469897   74222 notify.go:220] Checking for updates...
	I0924 18:42:01.470393   74222 config.go:182] Loaded profile config "ha-351700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:42:01.470414   74222 status.go:174] checking status of ha-351700 ...
	I0924 18:42:01.471309   74222 cli_runner.go:164] Run: docker container inspect ha-351700 --format={{.State.Status}}
	I0924 18:42:01.494329   74222 status.go:364] ha-351700 host status = "Running" (err=<nil>)
	I0924 18:42:01.494354   74222 host.go:66] Checking if "ha-351700" exists ...
	I0924 18:42:01.494712   74222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351700
	I0924 18:42:01.530775   74222 host.go:66] Checking if "ha-351700" exists ...
	I0924 18:42:01.531104   74222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:42:01.531162   74222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351700
	I0924 18:42:01.553235   74222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/ha-351700/id_rsa Username:docker}
	I0924 18:42:01.651389   74222 ssh_runner.go:195] Run: systemctl --version
	I0924 18:42:01.656487   74222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:42:01.671083   74222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:42:01.744748   74222 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-24 18:42:01.732268298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:42:01.745416   74222 kubeconfig.go:125] found "ha-351700" server: "https://192.168.49.254:8443"
	I0924 18:42:01.745452   74222 api_server.go:166] Checking apiserver status ...
	I0924 18:42:01.745502   74222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:42:01.760898   74222 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2366/cgroup
	I0924 18:42:01.772688   74222 api_server.go:182] apiserver freezer: "9:freezer:/docker/45eec344a576184da00e0d80da6ac5b572c47643053bf8c208de10116a2bb99a/kubepods/burstable/pod7fa5ff978a309cc1b7a6da8ff53350ea/a4c2ce9bd8dd10bf914f4af0657e45bf6063121aa23068b48eefc240dc30577f"
	I0924 18:42:01.772763   74222 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/45eec344a576184da00e0d80da6ac5b572c47643053bf8c208de10116a2bb99a/kubepods/burstable/pod7fa5ff978a309cc1b7a6da8ff53350ea/a4c2ce9bd8dd10bf914f4af0657e45bf6063121aa23068b48eefc240dc30577f/freezer.state
	I0924 18:42:01.782612   74222 api_server.go:204] freezer state: "THAWED"
	I0924 18:42:01.782653   74222 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0924 18:42:01.790762   74222 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0924 18:42:01.790792   74222 status.go:456] ha-351700 apiserver status = Running (err=<nil>)
	I0924 18:42:01.790803   74222 status.go:176] ha-351700 status: &{Name:ha-351700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:42:01.790823   74222 status.go:174] checking status of ha-351700-m02 ...
	I0924 18:42:01.791192   74222 cli_runner.go:164] Run: docker container inspect ha-351700-m02 --format={{.State.Status}}
	I0924 18:42:01.816541   74222 status.go:364] ha-351700-m02 host status = "Stopped" (err=<nil>)
	I0924 18:42:01.816570   74222 status.go:377] host is not running, skipping remaining checks
	I0924 18:42:01.816579   74222 status.go:176] ha-351700-m02 status: &{Name:ha-351700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:42:01.816602   74222 status.go:174] checking status of ha-351700-m03 ...
	I0924 18:42:01.816948   74222 cli_runner.go:164] Run: docker container inspect ha-351700-m03 --format={{.State.Status}}
	I0924 18:42:01.833600   74222 status.go:364] ha-351700-m03 host status = "Running" (err=<nil>)
	I0924 18:42:01.833624   74222 host.go:66] Checking if "ha-351700-m03" exists ...
	I0924 18:42:01.833962   74222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351700-m03
	I0924 18:42:01.852386   74222 host.go:66] Checking if "ha-351700-m03" exists ...
	I0924 18:42:01.852729   74222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:42:01.852771   74222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351700-m03
	I0924 18:42:01.871912   74222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/ha-351700-m03/id_rsa Username:docker}
	I0924 18:42:01.967032   74222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:42:01.981943   74222 kubeconfig.go:125] found "ha-351700" server: "https://192.168.49.254:8443"
	I0924 18:42:01.981974   74222 api_server.go:166] Checking apiserver status ...
	I0924 18:42:01.982018   74222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:42:01.996352   74222 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2129/cgroup
	I0924 18:42:02.007196   74222 api_server.go:182] apiserver freezer: "9:freezer:/docker/6920a10cdd16d4b355da8ae71284dd18599faf5cdd88e33c5824d442409242a7/kubepods/burstable/podf5f56a2dbb4f2e966e2a8db4176e8745/0cb19a165d71038abc2e77bc5ff0376086753f4e478153f0cd3526b5aae52ca2"
	I0924 18:42:02.007328   74222 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6920a10cdd16d4b355da8ae71284dd18599faf5cdd88e33c5824d442409242a7/kubepods/burstable/podf5f56a2dbb4f2e966e2a8db4176e8745/0cb19a165d71038abc2e77bc5ff0376086753f4e478153f0cd3526b5aae52ca2/freezer.state
	I0924 18:42:02.029421   74222 api_server.go:204] freezer state: "THAWED"
	I0924 18:42:02.029456   74222 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0924 18:42:02.037409   74222 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0924 18:42:02.037454   74222 status.go:456] ha-351700-m03 apiserver status = Running (err=<nil>)
	I0924 18:42:02.037463   74222 status.go:176] ha-351700-m03 status: &{Name:ha-351700-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:42:02.037482   74222 status.go:174] checking status of ha-351700-m04 ...
	I0924 18:42:02.037799   74222 cli_runner.go:164] Run: docker container inspect ha-351700-m04 --format={{.State.Status}}
	I0924 18:42:02.055822   74222 status.go:364] ha-351700-m04 host status = "Running" (err=<nil>)
	I0924 18:42:02.055856   74222 host.go:66] Checking if "ha-351700-m04" exists ...
	I0924 18:42:02.056176   74222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351700-m04
	I0924 18:42:02.079360   74222 host.go:66] Checking if "ha-351700-m04" exists ...
	I0924 18:42:02.079681   74222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:42:02.079730   74222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351700-m04
	I0924 18:42:02.102292   74222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/ha-351700-m04/id_rsa Username:docker}
	I0924 18:42:02.194396   74222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:42:02.208240   74222 status.go:176] ha-351700-m04 status: &{Name:ha-351700-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.86s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (54.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 node start m02 -v=7 --alsologtostderr
E0924 18:42:51.724363    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:51.730729    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:51.743202    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:51.765033    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:51.806494    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:51.887824    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:52.049283    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:52.370523    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:53.012766    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:54.294987    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-351700 node start m02 -v=7 --alsologtostderr: (53.54310821s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
E0924 18:42:56.857034    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr: (1.081818686s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (54.77s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.113617954s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (190.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-351700 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-351700 -v=7 --alsologtostderr
E0924 18:43:01.979285    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:43:12.220679    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:43:32.702073    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-351700 -v=7 --alsologtostderr: (34.4128618s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-351700 --wait=true -v=7 --alsologtostderr
E0924 18:43:50.849382    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:13.664309    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:18.554185    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:45:35.586614    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-351700 --wait=true -v=7 --alsologtostderr: (2m36.078120095s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-351700
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (190.66s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-351700 node delete m03 -v=7 --alsologtostderr: (10.446102154s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.38s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (23.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-351700 stop -v=7 --alsologtostderr: (23.470956222s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr: exit status 7 (140.798528ms)

                                                
                                                
-- stdout --
	ha-351700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351700-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351700-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:46:45.178269  101108 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:46:45.178487  101108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:46:45.178495  101108 out.go:358] Setting ErrFile to fd 2...
	I0924 18:46:45.178501  101108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:46:45.178771  101108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 18:46:45.178995  101108 out.go:352] Setting JSON to false
	I0924 18:46:45.179030  101108 mustload.go:65] Loading cluster: ha-351700
	I0924 18:46:45.179198  101108 notify.go:220] Checking for updates...
	I0924 18:46:45.179647  101108 config.go:182] Loaded profile config "ha-351700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:46:45.179684  101108 status.go:174] checking status of ha-351700 ...
	I0924 18:46:45.180336  101108 cli_runner.go:164] Run: docker container inspect ha-351700 --format={{.State.Status}}
	I0924 18:46:45.203325  101108 status.go:364] ha-351700 host status = "Stopped" (err=<nil>)
	I0924 18:46:45.203352  101108 status.go:377] host is not running, skipping remaining checks
	I0924 18:46:45.203360  101108 status.go:176] ha-351700 status: &{Name:ha-351700 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:46:45.203390  101108 status.go:174] checking status of ha-351700-m02 ...
	I0924 18:46:45.203745  101108 cli_runner.go:164] Run: docker container inspect ha-351700-m02 --format={{.State.Status}}
	I0924 18:46:45.237276  101108 status.go:364] ha-351700-m02 host status = "Stopped" (err=<nil>)
	I0924 18:46:45.237300  101108 status.go:377] host is not running, skipping remaining checks
	I0924 18:46:45.237309  101108 status.go:176] ha-351700-m02 status: &{Name:ha-351700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:46:45.237331  101108 status.go:174] checking status of ha-351700-m04 ...
	I0924 18:46:45.237710  101108 cli_runner.go:164] Run: docker container inspect ha-351700-m04 --format={{.State.Status}}
	I0924 18:46:45.262985  101108 status.go:364] ha-351700-m04 host status = "Stopped" (err=<nil>)
	I0924 18:46:45.263011  101108 status.go:377] host is not running, skipping remaining checks
	I0924 18:46:45.263019  101108 status.go:176] ha-351700-m04 status: &{Name:ha-351700-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (23.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (99.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-351700 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0924 18:47:51.723333    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:48:19.428398    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-351700 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.461602979s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.44s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (47.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-351700 --control-plane -v=7 --alsologtostderr
E0924 18:48:50.850961    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-351700 --control-plane -v=7 --alsologtostderr: (46.038085751s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-351700 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.02s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.003663291s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

                                                
                                    
TestImageBuild/serial/Setup (30.47s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-135145 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-135145 --driver=docker  --container-runtime=docker: (30.469035013s)
--- PASS: TestImageBuild/serial/Setup (30.47s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-135145
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-135145: (1.873878573s)
--- PASS: TestImageBuild/serial/NormalBuild (1.87s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.01s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-135145
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-135145: (1.009683222s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.01s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-135145
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.95s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-135145
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.95s)

                                                
                                    
TestJSONOutput/start/Command (72.65s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-814122 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-814122 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m12.646594166s)
--- PASS: TestJSONOutput/start/Command (72.65s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-814122 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.54s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-814122 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (10.99s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-814122 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-814122 --output=json --user=testUser: (10.989914371s)
--- PASS: TestJSONOutput/stop/Command (10.99s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-473539 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-473539 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.145764ms)
-- stdout --
	{"specversion":"1.0","id":"82b13ced-465c-412a-acc3-81177297a9f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-473539] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9863142-7aa5-4e1e-8cb5-73c01049d132","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"c749bf3d-81be-4591-a697-0807dbb3af2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8fda880d-cf6f-46f7-8c73-5e0efe17fcdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig"}}
	{"specversion":"1.0","id":"851dad4a-f666-41da-b8a7-da3e12d71b07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube"}}
	{"specversion":"1.0","id":"ab7cd595-ad7d-41cf-bba2-3092a2dd787c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6591069d-b303-45c7-aec6-e0b55f064dae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ee43689a-68d8-401e-9970-3e2934e5561f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-473539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-473539
--- PASS: TestErrorJSONOutput (0.22s)
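
Each stdout line above is a CloudEvents envelope; the test asserts that the failed start emits an `io.k8s.sigs.minikube.error` event with exit code 56. A minimal Python sketch of pulling that error payload out of the stream (the `extract_error` helper name is illustrative; the event line is copied verbatim from the stdout above):

```python
import json

# CloudEvents error line copied verbatim from the test's stdout.
event_line = '{"specversion":"1.0","id":"ee43689a-68d8-401e-9970-3e2934e5561f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}'

def extract_error(line):
    """Return the data payload if the line is a minikube error event, else None."""
    event = json.loads(line)
    if event.get("type") == "io.k8s.sigs.minikube.error":
        return event["data"]
    return None

err = extract_error(event_line)
print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```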
TestKicCustomNetwork/create_custom_network (32.82s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-453751 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-453751 --network=: (30.676996405s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-453751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-453751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-453751: (2.12263209s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.82s)
TestKicCustomNetwork/use_default_bridge_network (31.85s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-502360 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-502360 --network=bridge: (29.828555715s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-502360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-502360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-502360: (1.999125549s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.85s)
TestKicExistingNetwork (31.7s)
=== RUN   TestKicExistingNetwork
I0924 18:52:31.112121    7514 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0924 18:52:31.127480    7514 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0924 18:52:31.127572    7514 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0924 18:52:31.127592    7514 cli_runner.go:164] Run: docker network inspect existing-network
W0924 18:52:31.143996    7514 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0924 18:52:31.144027    7514 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0924 18:52:31.144049    7514 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0924 18:52:31.144184    7514 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0924 18:52:31.160724    7514 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c3a1f69ad0dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:82:a0:62:5d} reservation:<nil>}
I0924 18:52:31.162323    7514 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a312b0}
I0924 18:52:31.162356    7514 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0924 18:52:31.162427    7514 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0924 18:52:31.237953    7514 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-975500 --network=existing-network
E0924 18:52:51.723337    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-975500 --network=existing-network: (29.482799138s)
helpers_test.go:175: Cleaning up "existing-network-975500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-975500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-975500: (2.056339297s)
I0924 18:53:02.793046    7514 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.70s)
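
The network.go lines above show minikube skipping the taken 192.168.49.0/24 subnet and settling on 192.168.58.0/24. A rough Python sketch of that scan, assuming only what the log shows (the third-octet step of 9 is inferred from 49 → 58 in this log, not taken from network.go):

```python
import ipaddress

def pick_free_subnet(taken):
    """Walk 192.168.x.0/24 candidates starting at x=49, skipping taken subnets."""
    third = 49
    while third < 256:
        candidate = ipaddress.ip_network(f"192.168.{third}.0/24")
        if candidate not in taken:
            return candidate
        third += 9  # step inferred from this log, not from minikube source
    return None

# The default docker bridge already holds 192.168.49.0/24, as logged above.
taken = {ipaddress.ip_network("192.168.49.0/24")}
print(pick_free_subnet(taken))  # 192.168.58.0/24
```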
TestKicCustomSubnet (33.14s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-692666 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-692666 --subnet=192.168.60.0/24: (31.025314095s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-692666 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-692666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-692666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-692666: (2.084182182s)
--- PASS: TestKicCustomSubnet (33.14s)
TestKicStaticIP (34.55s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-790639 --static-ip=192.168.200.200
E0924 18:53:50.849461    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-790639 --static-ip=192.168.200.200: (32.278760343s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-790639 ip
helpers_test.go:175: Cleaning up "static-ip-790639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-790639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-790639: (2.124536962s)
--- PASS: TestKicStaticIP (34.55s)
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)
TestMinikubeProfile (68.84s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-349976 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-349976 --driver=docker  --container-runtime=docker: (31.857163389s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-352527 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-352527 --driver=docker  --container-runtime=docker: (31.321198523s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-349976
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
E0924 18:55:13.915686    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-352527
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-352527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-352527
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-352527: (2.128359514s)
helpers_test.go:175: Cleaning up "first-349976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-349976
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-349976: (2.138901182s)
--- PASS: TestMinikubeProfile (68.84s)
TestMountStart/serial/StartWithMountFirst (7.58s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-864479 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-864479 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.581543331s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.58s)
TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-864479 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
TestMountStart/serial/StartWithMountSecond (7.65s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-866325 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-866325 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.652538489s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.65s)
TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-866325 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)
TestMountStart/serial/DeleteFirst (1.48s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-864479 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-864479 --alsologtostderr -v=5: (1.482560894s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)
TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-866325 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)
TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-866325
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-866325: (1.213908598s)
--- PASS: TestMountStart/serial/Stop (1.21s)
TestMountStart/serial/RestartStopped (8.58s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-866325
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-866325: (7.584361948s)
--- PASS: TestMountStart/serial/RestartStopped (8.58s)
TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-866325 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
TestMultiNode/serial/FreshStart2Nodes (86.8s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-280005 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-280005 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.185547397s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.80s)
TestMultiNode/serial/DeployApp2Nodes (57.11s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-280005 -- rollout status deployment/busybox: (3.842978707s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:19.681398    7514 retry.go:31] will retry after 700.946455ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:20.582529    7514 retry.go:31] will retry after 1.770828955s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:22.514983    7514 retry.go:31] will retry after 1.770752874s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:24.441028    7514 retry.go:31] will retry after 2.511171844s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:27.104202    7514 retry.go:31] will retry after 2.684901953s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:29.928907    7514 retry.go:31] will retry after 6.672589342s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:36.762006    7514 retry.go:31] will retry after 11.786210473s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 18:57:48.823265    7514 retry.go:31] will retry after 21.928546198s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0924 18:57:51.724033    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-dk6k8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-xpdbp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-dk6k8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-xpdbp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-dk6k8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-xpdbp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (57.11s)
TestMultiNode/serial/PingHostFrom2Pods (1.04s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-dk6k8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-dk6k8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-xpdbp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-280005 -- exec busybox-7dff88458-xpdbp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)
TestMultiNode/serial/AddNode (17.23s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-280005 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-280005 -v 3 --alsologtostderr: (16.494793695s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.23s)
TestMultiNode/serial/MultiNodeLabels (0.11s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-280005 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)
TestMultiNode/serial/ProfileList (0.68s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)
TestMultiNode/serial/CopyFile (9.74s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp testdata/cp-test.txt multinode-280005:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile834826365/001/cp-test_multinode-280005.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005:/home/docker/cp-test.txt multinode-280005-m02:/home/docker/cp-test_multinode-280005_multinode-280005-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m02 "sudo cat /home/docker/cp-test_multinode-280005_multinode-280005-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005:/home/docker/cp-test.txt multinode-280005-m03:/home/docker/cp-test_multinode-280005_multinode-280005-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m03 "sudo cat /home/docker/cp-test_multinode-280005_multinode-280005-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp testdata/cp-test.txt multinode-280005-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile834826365/001/cp-test_multinode-280005-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005-m02:/home/docker/cp-test.txt multinode-280005:/home/docker/cp-test_multinode-280005-m02_multinode-280005.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005 "sudo cat /home/docker/cp-test_multinode-280005-m02_multinode-280005.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005-m02:/home/docker/cp-test.txt multinode-280005-m03:/home/docker/cp-test_multinode-280005-m02_multinode-280005-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m03 "sudo cat /home/docker/cp-test_multinode-280005-m02_multinode-280005-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp testdata/cp-test.txt multinode-280005-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile834826365/001/cp-test_multinode-280005-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005-m03:/home/docker/cp-test.txt multinode-280005:/home/docker/cp-test_multinode-280005-m03_multinode-280005.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005 "sudo cat /home/docker/cp-test_multinode-280005-m03_multinode-280005.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 cp multinode-280005-m03:/home/docker/cp-test.txt multinode-280005-m02:/home/docker/cp-test_multinode-280005-m03_multinode-280005-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 ssh -n multinode-280005-m02 "sudo cat /home/docker/cp-test_multinode-280005-m03_multinode-280005-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.74s)
TestMultiNode/serial/StopNode (2.2s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-280005 node stop m03: (1.21374453s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-280005 status: exit status 7 (499.674203ms)
-- stdout --
	multinode-280005
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-280005-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-280005-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr: exit status 7 (490.644584ms)
-- stdout --
	multinode-280005
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-280005-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-280005-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0924 18:58:43.082235  175903 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:58:43.082391  175903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:58:43.082415  175903 out.go:358] Setting ErrFile to fd 2...
	I0924 18:58:43.082422  175903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:58:43.082733  175903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 18:58:43.082988  175903 out.go:352] Setting JSON to false
	I0924 18:58:43.083029  175903 mustload.go:65] Loading cluster: multinode-280005
	I0924 18:58:43.083078  175903 notify.go:220] Checking for updates...
	I0924 18:58:43.083533  175903 config.go:182] Loaded profile config "multinode-280005": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:58:43.083558  175903 status.go:174] checking status of multinode-280005 ...
	I0924 18:58:43.084183  175903 cli_runner.go:164] Run: docker container inspect multinode-280005 --format={{.State.Status}}
	I0924 18:58:43.103808  175903 status.go:364] multinode-280005 host status = "Running" (err=<nil>)
	I0924 18:58:43.103831  175903 host.go:66] Checking if "multinode-280005" exists ...
	I0924 18:58:43.104146  175903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280005
	I0924 18:58:43.126722  175903 host.go:66] Checking if "multinode-280005" exists ...
	I0924 18:58:43.127073  175903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:58:43.127133  175903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280005
	I0924 18:58:43.152328  175903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/multinode-280005/id_rsa Username:docker}
	I0924 18:58:43.242175  175903 ssh_runner.go:195] Run: systemctl --version
	I0924 18:58:43.246474  175903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:58:43.257909  175903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 18:58:43.310192  175903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-24 18:58:43.300681712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 18:58:43.310772  175903 kubeconfig.go:125] found "multinode-280005" server: "https://192.168.67.2:8443"
	I0924 18:58:43.310803  175903 api_server.go:166] Checking apiserver status ...
	I0924 18:58:43.310851  175903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:58:43.322264  175903 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2294/cgroup
	I0924 18:58:43.331314  175903 api_server.go:182] apiserver freezer: "9:freezer:/docker/537a318774c7c979042a6ae8f530317aa56c99c0bc67591bba75daacb3071f5c/kubepods/burstable/pod282d1e591c97b05e159fce5ab0aba490/392b6d72e9adb259f315df64d3d52f2f7ced89d6628d3344c50ba6a6f18b5150"
	I0924 18:58:43.331401  175903 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/537a318774c7c979042a6ae8f530317aa56c99c0bc67591bba75daacb3071f5c/kubepods/burstable/pod282d1e591c97b05e159fce5ab0aba490/392b6d72e9adb259f315df64d3d52f2f7ced89d6628d3344c50ba6a6f18b5150/freezer.state
	I0924 18:58:43.340160  175903 api_server.go:204] freezer state: "THAWED"
	I0924 18:58:43.340190  175903 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0924 18:58:43.347812  175903 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0924 18:58:43.347839  175903 status.go:456] multinode-280005 apiserver status = Running (err=<nil>)
	I0924 18:58:43.347850  175903 status.go:176] multinode-280005 status: &{Name:multinode-280005 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:58:43.347866  175903 status.go:174] checking status of multinode-280005-m02 ...
	I0924 18:58:43.348180  175903 cli_runner.go:164] Run: docker container inspect multinode-280005-m02 --format={{.State.Status}}
	I0924 18:58:43.365376  175903 status.go:364] multinode-280005-m02 host status = "Running" (err=<nil>)
	I0924 18:58:43.365399  175903 host.go:66] Checking if "multinode-280005-m02" exists ...
	I0924 18:58:43.365719  175903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280005-m02
	I0924 18:58:43.383005  175903 host.go:66] Checking if "multinode-280005-m02" exists ...
	I0924 18:58:43.383340  175903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 18:58:43.383387  175903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280005-m02
	I0924 18:58:43.403617  175903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/19700-2203/.minikube/machines/multinode-280005-m02/id_rsa Username:docker}
	I0924 18:58:43.494205  175903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:58:43.505278  175903 status.go:176] multinode-280005-m02 status: &{Name:multinode-280005-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0924 18:58:43.505311  175903 status.go:174] checking status of multinode-280005-m03 ...
	I0924 18:58:43.505631  175903 cli_runner.go:164] Run: docker container inspect multinode-280005-m03 --format={{.State.Status}}
	I0924 18:58:43.521677  175903 status.go:364] multinode-280005-m03 host status = "Stopped" (err=<nil>)
	I0924 18:58:43.521699  175903 status.go:377] host is not running, skipping remaining checks
	I0924 18:58:43.521706  175903 status.go:176] multinode-280005-m03 status: &{Name:multinode-280005-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
TestMultiNode/serial/StartAfterStop (10.91s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 node start m03 -v=7 --alsologtostderr
E0924 18:58:50.848759    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-280005 node start m03 -v=7 --alsologtostderr: (10.164387238s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.91s)
TestMultiNode/serial/RestartKeepsNodes (93.8s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-280005
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-280005
E0924 18:59:14.791039    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-280005: (22.698438514s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-280005 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-280005 --wait=true -v=8 --alsologtostderr: (1m10.980743533s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-280005
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.80s)
TestMultiNode/serial/DeleteNode (5.66s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-280005 node delete m03: (5.003260118s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.66s)
TestMultiNode/serial/StopMultiNode (21.74s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-280005 stop: (21.572400959s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-280005 status: exit status 7 (80.705631ms)
-- stdout --
	multinode-280005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-280005-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr: exit status 7 (85.831715ms)
-- stdout --
	multinode-280005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-280005-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0924 19:00:55.585187  189402 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:00:55.585317  189402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:00:55.585327  189402 out.go:358] Setting ErrFile to fd 2...
	I0924 19:00:55.585335  189402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:00:55.585592  189402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-2203/.minikube/bin
	I0924 19:00:55.585764  189402 out.go:352] Setting JSON to false
	I0924 19:00:55.585790  189402 mustload.go:65] Loading cluster: multinode-280005
	I0924 19:00:55.585886  189402 notify.go:220] Checking for updates...
	I0924 19:00:55.586210  189402 config.go:182] Loaded profile config "multinode-280005": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 19:00:55.586226  189402 status.go:174] checking status of multinode-280005 ...
	I0924 19:00:55.586821  189402 cli_runner.go:164] Run: docker container inspect multinode-280005 --format={{.State.Status}}
	I0924 19:00:55.604316  189402 status.go:364] multinode-280005 host status = "Stopped" (err=<nil>)
	I0924 19:00:55.604341  189402 status.go:377] host is not running, skipping remaining checks
	I0924 19:00:55.604348  189402 status.go:176] multinode-280005 status: &{Name:multinode-280005 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 19:00:55.604372  189402 status.go:174] checking status of multinode-280005-m02 ...
	I0924 19:00:55.604685  189402 cli_runner.go:164] Run: docker container inspect multinode-280005-m02 --format={{.State.Status}}
	I0924 19:00:55.626693  189402 status.go:364] multinode-280005-m02 host status = "Stopped" (err=<nil>)
	I0924 19:00:55.626715  189402 status.go:377] host is not running, skipping remaining checks
	I0924 19:00:55.626723  189402 status.go:176] multinode-280005-m02 status: &{Name:multinode-280005-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.74s)
TestMultiNode/serial/RestartMultiNode (55.83s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-280005 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-280005 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.155655279s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-280005 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.83s)
TestMultiNode/serial/ValidateNameConflict (34s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-280005
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-280005-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-280005-m02 --driver=docker  --container-runtime=docker: exit status 14 (88.594391ms)
-- stdout --
	* [multinode-280005-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-280005-m02' is duplicated with machine name 'multinode-280005-m02' in profile 'multinode-280005'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-280005-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-280005-m03 --driver=docker  --container-runtime=docker: (31.147145724s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-280005
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-280005: exit status 80 (573.651309ms)
-- stdout --
	* Adding node m03 to cluster multinode-280005 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-280005-m03 already exists in multinode-280005-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-280005-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-280005-m03: (2.139618411s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.00s)
TestPreload (143s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-560180 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0924 19:02:51.724230    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:03:50.848895    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-560180 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m43.112272006s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-560180 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-560180 image pull gcr.io/k8s-minikube/busybox: (2.154042869s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-560180
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-560180: (10.649227071s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-560180 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-560180 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (24.476046519s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-560180 image list
helpers_test.go:175: Cleaning up "test-preload-560180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-560180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-560180: (2.292282817s)
--- PASS: TestPreload (143.00s)

                                                
                                    
TestScheduledStopUnix (103.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-889268 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-889268 --memory=2048 --driver=docker  --container-runtime=docker: (30.796571753s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-889268 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-889268 -n scheduled-stop-889268
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-889268 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0924 19:05:23.767838    7514 retry.go:31] will retry after 134.698µs: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.768974    7514 retry.go:31] will retry after 109.404µs: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.769242    7514 retry.go:31] will retry after 177.772µs: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.770065    7514 retry.go:31] will retry after 309.437µs: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.771184    7514 retry.go:31] will retry after 272.703µs: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.772333    7514 retry.go:31] will retry after 820.852µs: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.773460    7514 retry.go:31] will retry after 1.414898ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.775687    7514 retry.go:31] will retry after 1.229371ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.777881    7514 retry.go:31] will retry after 2.448009ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.781110    7514 retry.go:31] will retry after 4.723633ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.786488    7514 retry.go:31] will retry after 4.512072ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.791782    7514 retry.go:31] will retry after 5.551951ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.798026    7514 retry.go:31] will retry after 15.45098ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.814311    7514 retry.go:31] will retry after 16.619093ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.831573    7514 retry.go:31] will retry after 35.75571ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
I0924 19:05:23.867813    7514 retry.go:31] will retry after 53.673894ms: open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/scheduled-stop-889268/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-889268 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-889268 -n scheduled-stop-889268
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-889268
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-889268 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-889268
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-889268: exit status 7 (62.554505ms)

                                                
                                                
-- stdout --
	scheduled-stop-889268
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-889268 -n scheduled-stop-889268
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-889268 -n scheduled-stop-889268: exit status 7 (74.725349ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-889268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-889268
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-889268: (1.65146942s)
--- PASS: TestScheduledStopUnix (103.98s)
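The `retry.go:31` lines above show pid-file polling with intervals growing from ~134µs to ~53ms, which looks like capped exponential backoff with jitter. A minimal sketch of that pattern (hypothetical helper, not minikube's actual `retry.go` implementation):

```python
import random
import time

def retry_with_backoff(op, attempts=16, base=0.0001, cap=0.1):
    """Retry op() with jittered exponential backoff, loosely modeled on
    the pid-file polling intervals seen in the log above."""
    delay = base
    for attempt in range(attempts):
        try:
            return op()
        except FileNotFoundError:
            if attempt == attempts - 1:
                raise
            # full jitter: sleep a random fraction of the current window
            time.sleep(random.uniform(0, min(delay, cap)))
            delay *= 2
```

The non-monotonic intervals in the log (109µs after 134µs) are consistent with jitter rather than a fixed doubling schedule.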

                                                
                                    
TestSkaffold (122.67s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe524814920 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-421190 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-421190 --memory=2600 --driver=docker  --container-runtime=docker: (32.986868329s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe524814920 run --minikube-profile skaffold-421190 --kube-context skaffold-421190 --status-check=true --port-forward=false --interactive=false
E0924 19:07:51.723674    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe524814920 run --minikube-profile skaffold-421190 --kube-context skaffold-421190 --status-check=true --port-forward=false --interactive=false: (1m13.388240337s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-85879c98bf-rn48w" [f4ae4647-d4a0-468a-92c6-6f74f429eb15] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004453297s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7cf49f9f97-rpkhc" [8189160c-2a51-459b-9630-4377fbce82d9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 6.00358018s
helpers_test.go:175: Cleaning up "skaffold-421190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-421190
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-421190: (2.982614152s)
--- PASS: TestSkaffold (122.67s)

                                                
                                    
TestInsufficientStorage (13.07s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-849064 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-849064 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.787034656s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b6344380-260e-40fb-985c-88837d5d946e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-849064] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fb50f8e-3015-4547-9a51-2175efdd9a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"c9a2a365-0fe2-4381-82f3-3ae6b3e1f88e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e85f81e7-9721-4c15-a06d-c9f4f28bd1bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig"}}
	{"specversion":"1.0","id":"7cdebce8-8d8b-42c3-8f7d-999bf854382e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube"}}
	{"specversion":"1.0","id":"aa581b05-a336-4a00-8af4-a329326d23b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d331f386-45eb-4376-848b-2207c08aecd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c9307c82-7987-45cf-aef5-8f813c7f9c04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"27ebbe0d-c0d1-4454-942a-b93114429623","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e247f92b-efd1-4da5-8359-cacfb4c8ec8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"54e3d9e7-6524-483a-b665-ac66879f1ccc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ad3d824f-31b5-453d-9d67-115e703291ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-849064\" primary control-plane node in \"insufficient-storage-849064\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"786173ae-60ed-4194-96bf-172c91683c9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"64505b43-3b1a-448d-8248-e241701b6f58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3e2605d-a04b-4ee6-b4e4-c425c0a1c0a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-849064 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-849064 --output=json --layout=cluster: exit status 7 (323.277386ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-849064","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-849064","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:08:50.230205  223774 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-849064" does not appear in /home/jenkins/minikube-integration/19700-2203/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-849064 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-849064 --output=json --layout=cluster: exit status 7 (275.577346ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-849064","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-849064","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:08:50.507178  223836 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-849064" does not appear in /home/jenkins/minikube-integration/19700-2203/kubeconfig
	E0924 19:08:50.517191  223836 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/insufficient-storage-849064/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-849064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-849064
E0924 19:08:50.848922    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-849064: (1.678558682s)
--- PASS: TestInsufficientStorage (13.07s)
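The `--layout=cluster` payload above is plain JSON, so per-component health can be pulled out directly. A small sketch using the exact JSON literal copied from the log (field names as emitted by minikube v1.34.0):

```python
import json

# Status document copied verbatim from the test output above
status = json.loads('''
{"Name":"insufficient-storage-849064","StatusCode":507,
 "StatusName":"InsufficientStorage",
 "StatusDetail":"/var is almost out of disk space",
 "BinaryVersion":"v1.34.0",
 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},
 "Nodes":[{"Name":"insufficient-storage-849064","StatusCode":507,
   "StatusName":"InsufficientStorage",
   "Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
                 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
''')

print(status["StatusName"])  # InsufficientStorage
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusName"])
```

This is how the test's `status_test.go` assertions can be reproduced by hand against `minikube status --output=json --layout=cluster`.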

                                                
                                    
TestRunningBinaryUpgrade (90.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2405648637 start -p running-upgrade-097943 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2405648637 start -p running-upgrade-097943 --memory=2200 --vm-driver=docker  --container-runtime=docker: (40.171610874s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-097943 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0924 19:17:51.723965    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-097943 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.8881413s)
helpers_test.go:175: Cleaning up "running-upgrade-097943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-097943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-097943: (2.190096118s)
--- PASS: TestRunningBinaryUpgrade (90.31s)

                                                
                                    
TestKubernetesUpgrade (395.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m16.841309526s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-789662
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-789662: (1.272103781s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-789662 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-789662 status --format={{.Host}}: exit status 7 (68.19224ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m46.228190255s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-789662 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (103.516081ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-789662] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-789662
	    minikube start -p kubernetes-upgrade-789662 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7896622 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-789662 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-789662 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.821103975s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-789662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-789662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-789662: (2.601291519s)
--- PASS: TestKubernetesUpgrade (395.04s)
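The exit-106 `K8S_DOWNGRADE_UNSUPPORTED` path above boils down to a semver comparison between the cluster's existing version and the requested one. A hedged illustration (hypothetical function, not minikube's source):

```python
def parse_version(v):
    """'v1.31.1' -> (1, 31, 1) for tuple comparison."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_kubernetes_version(existing, requested):
    """Mimic the downgrade guard seen in the log above: refuse to move
    an existing cluster to an older Kubernetes version."""
    if parse_version(requested) < parse_version(existing):
        raise ValueError(
            f"Unable to safely downgrade existing Kubernetes {existing} "
            f"cluster to {requested}")
    return requested
```

Under this check, `v1.20.0 -> v1.31.1` and same-version restarts succeed, while `v1.31.1 -> v1.20.0` is rejected, matching the three runs in the transcript.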

                                                
                                    
TestMissingContainerUpgrade (155s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.740568600 start -p missing-upgrade-671847 --memory=2200 --driver=docker  --container-runtime=docker
E0924 19:12:51.724175    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.128849    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.136037    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.147446    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.169088    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.210505    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.291878    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.453358    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:24.774899    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:25.417028    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:26.698774    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:29.261328    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:34.383611    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.740568600 start -p missing-upgrade-671847 --memory=2200 --driver=docker  --container-runtime=docker: (1m24.462180983s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-671847
E0924 19:13:44.624996    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:13:50.848815    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-671847: (10.455587592s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-671847
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-671847 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0924 19:14:05.107142    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-671847 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (56.741037317s)
helpers_test.go:175: Cleaning up "missing-upgrade-671847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-671847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-671847: (2.133672056s)
--- PASS: TestMissingContainerUpgrade (155.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241201 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-241201 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (96.073563ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-241201] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-2203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-2203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (42.01s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241201 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241201 --driver=docker  --container-runtime=docker: (41.614175071s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-241201 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.01s)

TestNoKubernetes/serial/StartWithStopK8s (9.69s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241201 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241201 --no-kubernetes --driver=docker  --container-runtime=docker: (7.347843345s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-241201 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-241201 status -o json: exit status 2 (469.554742ms)

-- stdout --
	{"Name":"NoKubernetes-241201","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-241201
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-241201: (1.868148136s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.69s)

TestNoKubernetes/serial/Start (7.59s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241201 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241201 --no-kubernetes --driver=docker  --container-runtime=docker: (7.592935396s)
--- PASS: TestNoKubernetes/serial/Start (7.59s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-241201 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-241201 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.652075ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (1.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-241201
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-241201: (1.220546694s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (8.5s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241201 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241201 --driver=docker  --container-runtime=docker: (8.501302617s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.50s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-241201 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-241201 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.953142ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (0.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestStoppedBinaryUpgrade/Upgrade (91.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3120104936 start -p stopped-upgrade-540296 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0924 19:15:54.792418    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3120104936 start -p stopped-upgrade-540296 --memory=2200 --vm-driver=docker  --container-runtime=docker: (46.088624663s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3120104936 -p stopped-upgrade-540296 stop
E0924 19:16:07.991168    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3120104936 -p stopped-upgrade-540296 stop: (10.867478036s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-540296 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-540296 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.721405748s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (91.68s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-540296
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-540296: (1.419427711s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

TestPause/serial/Start (70.14s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-421138 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0924 19:18:24.127657    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:18:50.848644    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:18:51.833051    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-421138 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m10.141708653s)
--- PASS: TestPause/serial/Start (70.14s)

TestPause/serial/SecondStartNoReconfiguration (29.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-421138 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-421138 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.83800753s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.86s)

TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-421138 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-421138 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-421138 --output=json --layout=cluster: exit status 2 (315.001195ms)

-- stdout --
	{"Name":"pause-421138","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-421138","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-421138 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/PauseAgain (0.66s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-421138 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.66s)

TestPause/serial/DeletePaused (2.15s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-421138 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-421138 --alsologtostderr -v=5: (2.147864155s)
--- PASS: TestPause/serial/DeletePaused (2.15s)

TestPause/serial/VerifyDeletedResources (13.21s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (13.140172886s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-421138
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-421138: exit status 1 (24.384002ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-421138: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.21s)

TestNetworkPlugins/group/auto/Start (53.09s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (53.093271341s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-466463 "pgrep -a kubelet"
I0924 19:21:12.840574    7514 config.go:182] Loaded profile config "auto-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-466463 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kdzwj" [0532ab31-b1c2-4880-beb0-a2ad5a2a34ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kdzwj" [0532ab31-b1c2-4880-beb0-a2ad5a2a34ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009861104s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.41s)

TestNetworkPlugins/group/auto/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.34s)

TestNetworkPlugins/group/auto/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

TestNetworkPlugins/group/auto/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.33s)

TestNetworkPlugins/group/kindnet/Start (77.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m17.863232928s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.86s)

TestNetworkPlugins/group/calico/Start (76.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m16.954137899s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.95s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kg944" [98b24530-84ef-4a55-a410-8c88e599da31] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004564341s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-466463 "pgrep -a kubelet"
I0924 19:22:50.918616    7514 config.go:182] Loaded profile config "kindnet-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-466463 replace --force -f testdata/netcat-deployment.yaml
I0924 19:22:51.312820    7514 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zp8vz" [1a6f05f7-5f6a-4f1b-b8fb-1c8b6c128f9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 19:22:51.724373    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zp8vz" [1a6f05f7-5f6a-4f1b-b8fb-1c8b6c128f9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004240481s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.40s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qb5gk" [a37c849f-935c-4400-a996-d44caa844cee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004413595s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-466463 "pgrep -a kubelet"
I0924 19:23:13.283938    7514 config.go:182] Loaded profile config "calico-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-466463 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-242ff" [c5c7133c-60d2-46b9-9645-68f41ef159b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-242ff" [c5c7133c-60d2-46b9-9645-68f41ef159b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006354728s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.41s)

TestNetworkPlugins/group/custom-flannel/Start (60.72s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m0.718874842s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.72s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/false/Start (86.17s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m26.171830671s)
--- PASS: TestNetworkPlugins/group/false/Start (86.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-466463 "pgrep -a kubelet"
I0924 19:24:26.267102    7514 config.go:182] Loaded profile config "custom-flannel-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-466463 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-drwzh" [7e658fdd-2107-4e0b-a01c-5f71525261b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-drwzh" [7e658fdd-2107-4e0b-a01c-5f71525261b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003510175s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.43s)
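The NetCatPod steps above apply testdata/netcat-deployment.yaml and then poll for up to 15m until pods matching "app=netcat" report healthy. A minimal sketch of that wait loop as a standalone shell helper; the `wait_for` name and the `true` probe are illustrative stand-ins, not minikube code:

```shell
#!/bin/sh
# Hedged sketch of the harness's poll loop (net_test.go:163): retry a probe
# command at a fixed interval until it succeeds or a timeout elapses.
# wait_for PROBE TIMEOUT_SECS INTERVAL_SECS  (hypothetical helper)
wait_for() {
  probe=$1; timeout=$2; interval=$3; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if sh -c "$probe" >/dev/null 2>&1; then
      return 0              # probe succeeded: target is healthy
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1                  # timed out
}

# Against a live cluster the probe would be a kubectl query such as:
#   kubectl get pods -l app=netcat -o jsonpath='{.items[*].status.phase}' | grep -qx Running
wait_for "true" 3 1 && echo "healthy"
```

Here a trivial `true` probe keeps the sketch self-contained; only the retry structure mirrors the harness.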

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (80.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m20.635815765s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.64s)

TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-466463 "pgrep -a kubelet"
I0924 19:25:18.843042    7514 config.go:182] Loaded profile config "false-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)
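The KubeletFlags checks above run `pgrep -a kubelet` over `minikube ssh` and inspect the resulting command line. A hedged illustration of pulling the long flags out of such a line; the sample line and its flag values are invented for the sketch, not taken from this run:

```shell
#!/bin/sh
# Sample `pgrep -a` output: PID followed by the full command line.
# The flag values below are assumptions for illustration only.
line='1234 /var/lib/minikube/binaries/v1.31.1/kubelet --hostname-override=false-466463 --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock'

# Split on spaces and keep only the long (--) flags.
echo "$line" | tr ' ' '\n' | grep '^--'
```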

TestNetworkPlugins/group/false/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-466463 replace --force -f testdata/netcat-deployment.yaml
I0924 19:25:19.185089    7514 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xwvxh" [f0c37b29-83e3-433b-9610-c71568454e20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xwvxh" [f0c37b29-83e3-433b-9610-c71568454e20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003732465s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.35s)

TestNetworkPlugins/group/false/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

TestNetworkPlugins/group/false/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.22s)

TestNetworkPlugins/group/flannel/Start (51.52s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0924 19:26:13.213528    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:13.219915    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:13.231269    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:13.252640    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:13.294006    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:13.375402    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:13.536828    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:13.858296    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:14.500043    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:15.781470    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:26:18.343130    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (51.516026613s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.52s)
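The cert_rotation errors above come from client-go retrying a client.crt that belongs to the already-deleted auto-466463 profile; the timestamps sit roughly 6 ms, 12 ms, 25 ms, and so on up to about 2.5 s apart, which looks like exponential backoff with factor 2. A minimal sketch of such a schedule (the ~6 ms base is read off the log; the loop itself is illustrative, not client-go code):

```shell
#!/bin/sh
# Hedged sketch: print an exponential backoff schedule like the retry cadence
# visible in the cert_rotation timestamps above (milliseconds, factor 2).
base=6
d=$base
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  printf '%sms ' "$d"
  d=$((d * 2))     # double the delay after every failed attempt
done
echo
```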

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-466463 "pgrep -a kubelet"
I0924 19:26:21.487013    7514 config.go:182] Loaded profile config "enable-default-cni-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-466463 replace --force -f testdata/netcat-deployment.yaml
I0924 19:26:21.825573    7514 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tsrrz" [d2313d6b-05a1-431d-ab40-631d8c1e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 19:26:23.465217    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-tsrrz" [d2313d6b-05a1-431d-ab40-631d8c1e2fcc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004301639s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xcm6h" [5fb7184a-14ec-43ce-a41f-f8ec8987f86d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004624922s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-466463 "pgrep -a kubelet"
I0924 19:26:51.238723    7514 config.go:182] Loaded profile config "flannel-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-466463 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zvzcb" [022e3388-81af-4ee1-b36c-04fcb52cf231] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zvzcb" [022e3388-81af-4ee1-b36c-04fcb52cf231] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003973193s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

TestNetworkPlugins/group/bridge/Start (53.14s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (53.139610581s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.14s)

TestNetworkPlugins/group/flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

TestNetworkPlugins/group/flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.29s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/kubenet/Start (83.78s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0924 19:27:35.149876    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:44.517437    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:44.523813    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:44.535393    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:44.557176    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:44.598590    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:44.679950    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:44.841534    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:45.163302    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:45.805433    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:47.087426    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-466463 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m23.78026911s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (83.78s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-466463 "pgrep -a kubelet"
I0924 19:27:48.272433    7514 config.go:182] Loaded profile config "bridge-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-466463 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xwdg2" [c9ab575c-b898-4a79-a4bd-db97d9d37bad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 19:27:49.649025    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:27:51.724249    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-xwdg2" [c9ab575c-b898-4a79-a4bd-db97d9d37bad] Running
E0924 19:27:54.771664    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004338577s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

TestNetworkPlugins/group/bridge/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

TestNetworkPlugins/group/bridge/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.31s)

TestNetworkPlugins/group/bridge/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.28s)

TestStartStop/group/old-k8s-version/serial/FirstStart (149.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-368768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0924 19:28:24.127553    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:28:25.495523    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:28:27.460476    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/calico-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:28:33.919418    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:28:47.941828    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/calico-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:28:50.848794    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-368768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m29.205515415s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.21s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-466463 "pgrep -a kubelet"
I0924 19:28:52.457600    7514 config.go:182] Loaded profile config "kubenet-466463": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-466463 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v9xx4" [7b32e254-11b8-489d-8228-bd644bdc47f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 19:28:57.072332    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-v9xx4" [7b32e254-11b8-489d-8228-bd644bdc47f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004041065s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.38s)

TestNetworkPlugins/group/kubenet/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-466463 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.30s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-466463 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
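Reports like this one are easiest to audit by counting the go test result markers. A small sketch over a canned excerpt; the here-doc stands in for the full log, with a FAIL line mirroring the TestAddons/parallel/Registry failure reported in the header:

```shell
#!/bin/sh
# Count result lines in a go test log. The excerpt is a stand-in for the
# full report; only the `--- PASS` / `--- FAIL` line format matters.
cat > gotest.log <<'EOF'
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.14s)
--- FAIL: TestAddons/parallel/Registry (75.70s)
EOF

passes=$(grep -c '^--- PASS' gotest.log)
fails=$(grep -c '^--- FAIL' gotest.log)
echo "passed=$passes failed=$fails"
```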

TestStartStop/group/no-preload/serial/FirstStart (54.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-316593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:29:26.669078    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:26.675430    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:26.687073    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:26.710587    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:26.752099    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:26.836399    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:26.998348    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:27.321352    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:27.964038    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:28.903768    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/calico-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:29.245752    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:31.807135    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:36.928558    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:47.171837    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:29:47.195142    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:07.653969    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.160373    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.166715    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.178060    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.199370    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.241334    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.323009    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.484469    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:19.806202    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:20.448112    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-316593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (54.135297169s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.14s)

TestStartStop/group/no-preload/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-316593 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [44bc5a02-6f93-4c93-aa4f-b9f9d55df43a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0924 19:30:21.730004    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:24.292445    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [44bc5a02-6f93-4c93-aa4f-b9f9d55df43a] Running
E0924 19:30:28.379517    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:29.414641    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003267978s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-316593 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-316593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-316593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0452508s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-316593 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-316593 --alsologtostderr -v=3
E0924 19:30:39.656665    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-316593 --alsologtostderr -v=3: (11.025896522s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-316593 -n no-preload-316593
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-316593 -n no-preload-316593: exit status 7 (76.582284ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-316593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (267.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-316593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:30:48.616199    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:30:50.825744    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/calico-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-316593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.894534929s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-316593 -n no-preload-316593
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.26s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-368768 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d48b7b0b-e3aa-431f-ab1e-0c8afafc71b4] Pending
helpers_test.go:344: "busybox" [d48b7b0b-e3aa-431f-ab1e-0c8afafc71b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d48b7b0b-e3aa-431f-ab1e-0c8afafc71b4] Running
E0924 19:31:00.138052    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004011488s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-368768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-368768 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-368768 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.254216004s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-368768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/old-k8s-version/serial/Stop (11.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-368768 --alsologtostderr -v=3
E0924 19:31:13.211814    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-368768 --alsologtostderr -v=3: (11.099773817s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-368768 -n old-k8s-version-368768
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-368768 -n old-k8s-version-368768: exit status 7 (71.823258ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-368768 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (296.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-368768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0924 19:31:21.806319    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:21.812667    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:21.824027    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:21.845400    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:21.886729    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:21.968339    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:22.130393    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:22.451944    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:23.093514    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:24.375185    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:26.936851    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:32.058557    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:40.914564    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:41.100063    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:42.300519    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:44.826475    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:44.832893    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:44.844316    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:44.865733    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:44.907147    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:44.988548    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:45.151457    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:45.474218    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:46.116093    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:47.398178    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:49.959456    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:31:55.081535    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:02.782245    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:05.323519    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:10.538331    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:25.805458    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:34.794605    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:43.743681    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:44.517452    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:48.568947    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:48.575371    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:48.586759    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:48.608249    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:48.649607    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:48.731011    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:48.892553    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:49.214775    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:49.856233    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:51.137816    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:51.723912    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:53.699896    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:32:58.822064    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:03.027041    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:06.767665    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:06.952229    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/calico-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:09.063817    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:12.221720    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:24.127486    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:29.545696    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:34.667646    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/calico-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:50.849435    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:52.815274    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:52.821695    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:52.833103    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:52.854460    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:52.896276    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:52.977657    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:53.139104    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:53.460837    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:54.102709    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:55.384508    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:33:57.946264    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:03.067728    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:05.665280    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:10.507786    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:13.309722    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:26.668954    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:28.689705    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:33.791443    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:34:54.380637    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-368768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (4m55.761758346s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-368768 -n old-k8s-version-368768
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (296.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wlsqz" [257b12f3-9da6-41d3-82ca-a6aefbff9659] Running
E0924 19:35:14.753674    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004349085s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wlsqz" [257b12f3-9da6-41d3-82ca-a6aefbff9659] Running
E0924 19:35:19.160815    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003890984s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-316593 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-316593 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-316593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-316593 -n no-preload-316593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-316593 -n no-preload-316593: exit status 2 (316.412177ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-316593 -n no-preload-316593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-316593 -n no-preload-316593: exit status 2 (346.597292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-316593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-316593 -n no-preload-316593
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-316593 -n no-preload-316593
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.84s)
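Editor's note on the Pause sequence above: `minikube status` exits non-zero when a component is not in the Running state (exit status 2 here, with the apiserver reporting `Paused` and the kubelet `Stopped`), and the test deliberately treats that as acceptable. A minimal sketch of that tolerant check, with a hypothetical `minikube_status` function stubbing the real `out/minikube-linux-arm64 status --format={{.APIServer}}` call:

```shell
# Sketch only: minikube_status stubs the real minikube call, which prints the
# component state on stdout and exits 2 when that state is not Running.
minikube_status() {
  echo "Paused"
  return 2
}

rc=0
state=$(minikube_status) || rc=$?
# Mirror the test's handling: a non-zero status exit is logged, not fatal.
if [ "$rc" -ne 0 ]; then
  echo "status error: exit status $rc (may be ok)"
fi
echo "APIServer: $state"
```

After `unpause`, the test repeats the same two status queries and expects them to succeed, which is why no `Non-zero exit` lines follow the unpause step.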

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (79.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-295681 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:35:32.429224    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:35:46.869125    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-295681 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m19.547704443s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8bh4c" [c38afad1-65fd-4d80-a83a-5b5ab88e18f0] Running
E0924 19:36:13.212261    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005194241s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8bh4c" [c38afad1-65fd-4d80-a83a-5b5ab88e18f0] Running
E0924 19:36:21.807107    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005748095s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-368768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-368768 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-368768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368768 -n old-k8s-version-368768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368768 -n old-k8s-version-368768: exit status 2 (315.251226ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-368768 -n old-k8s-version-368768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-368768 -n old-k8s-version-368768: exit status 2 (329.067522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-368768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368768 -n old-k8s-version-368768
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-368768 -n old-k8s-version-368768
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-101049 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:36:36.675817    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:36:44.826632    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-101049 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m15.800870347s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-295681 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7d375f55-d5cc-4d43-b53f-4b19460844a0] Pending
helpers_test.go:344: "busybox" [7d375f55-d5cc-4d43-b53f-4b19460844a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0924 19:36:49.506974    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7d375f55-d5cc-4d43-b53f-4b19460844a0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004254929s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-295681 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)
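Editor's note: the "waiting 8m0s for pods matching ..." step above is a poll-until-Running loop with a deadline. A stubbed sketch of that pattern, where the hypothetical `pod_phase` stands in for a cluster query such as `kubectl get po -l integration-test=busybox -o jsonpath='{.items[0].status.phase}'`:

```shell
# Sketch only: pod_phase stubs the cluster query; the real test watches the
# pod transition Pending -> ContainersNotReady -> Running, as logged above.
pod_phase() { echo "Running"; }

# Poll until the pod reports Running or the deadline (seconds) passes.
wait_healthy() {
  deadline=$(( $(date +%s) + $1 ))
  while [ "$(date +%s)" -le "$deadline" ]; do
    if [ "$(pod_phase)" = "Running" ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}

wait_healthy 5 && echo "integration-test=busybox healthy"
```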

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-295681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-295681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.179074909s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-295681 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-295681 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-295681 --alsologtostderr -v=3: (11.093283139s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-295681 -n embed-certs-295681
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-295681 -n embed-certs-295681: exit status 7 (66.854163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-295681 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-295681 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:37:12.530975    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-295681 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.227299246s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-295681 -n embed-certs-295681
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-101049 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [35900ccf-17f7-4f15-ae61-71627bb465c9] Pending
E0924 19:37:44.517230    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [35900ccf-17f7-4f15-ae61-71627bb465c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [35900ccf-17f7-4f15-ae61-71627bb465c9] Running
E0924 19:37:48.568459    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:37:51.723547    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/functional-105990/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00377852s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-101049 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)
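Editor's note: both DeployApp tests finish by exec'ing `ulimit -n` inside the busybox pod, which reports the container's open-file-descriptor soft limit. The same builtin can be run in any shell for comparison:

```shell
# `ulimit -n` prints the soft limit on open file descriptors for the current
# shell; inside the pod it confirms the container inherited a usable limit.
ulimit -n
```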

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-101049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-101049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032134983s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-101049 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-101049 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-101049 --alsologtostderr -v=3: (11.082767571s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049: exit status 7 (71.538745ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-101049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-101049 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:38:06.951744    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/calico-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:38:16.272779    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:38:24.127965    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/skaffold-421190/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:38:50.848552    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/addons-706965/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:38:52.815120    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:20.517128    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kubenet-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:26.668486    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/custom-flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:19.160242    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/false-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:20.782143    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:20.788619    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:20.800059    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:20.821567    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:20.863306    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:20.944722    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:21.106225    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:21.429286    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:22.071258    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:23.352710    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:25.914094    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:31.035518    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:41.276872    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:51.959869    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:51.966246    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:51.977606    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:51.999638    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:52.041282    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:52.122680    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:52.284264    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:52.606017    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:53.248356    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:54.529964    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:57.092271    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:01.758723    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:02.213916    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:12.456111    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:13.212231    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:21.806111    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/enable-default-cni-466463/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:32.937462    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-101049 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.491036769s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.98s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ngkfp" [704b7026-7e1b-4f5f-93f6-a0d7131732a3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005782711s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ngkfp" [704b7026-7e1b-4f5f-93f6-a0d7131732a3] Running
E0924 19:41:42.720865    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:44.827178    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/flannel-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003925619s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-295681 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-295681 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
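The VerifyKubernetesImages step runs `image list --format=json` and reports anything outside minikube's own registries, which is why `gcr.io/k8s-minikube/busybox:1.28.4-glibc` is logged as a non-minikube image. A minimal sketch of that kind of filtering, using a hypothetical JSON sample and registry prefix list (the real minikube output schema and the test's allow-list may differ):

```python
import json

# Hypothetical sample of `minikube image list --format=json` output.
# The real field layout may differ; this only illustrates the check.
sample = json.loads("""
[
  {"id": "sha256:aaa", "repoTags": ["registry.k8s.io/kube-apiserver:v1.31.1"]},
  {"id": "sha256:bbb", "repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}
]
""")

# Registries treated here as minikube's own; anything else is flagged,
# mirroring the "Found non-minikube image" line in the log above.
KNOWN_PREFIXES = ("registry.k8s.io/", "docker.io/kubernetesui/")

def non_minikube_images(images):
    """Return every repo tag that does not start with a known prefix."""
    return [
        tag
        for img in images
        for tag in img.get("repoTags", [])
        if not tag.startswith(KNOWN_PREFIXES)
    ]

for tag in non_minikube_images(sample):
    print("Found non-minikube image:", tag)
```

`str.startswith` accepts a tuple of prefixes, so a single pass over the tags suffices.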

TestStartStop/group/embed-certs/serial/Pause (2.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-295681 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-295681 -n embed-certs-295681
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-295681 -n embed-certs-295681: exit status 2 (304.317919ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-295681 -n embed-certs-295681
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-295681 -n embed-certs-295681: exit status 2 (356.597972ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-295681 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-295681 -n embed-certs-295681
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-295681 -n embed-certs-295681
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.85s)

TestStartStop/group/newest-cni/serial/FirstStart (40.64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-170533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:42:13.898724    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/old-k8s-version-368768/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-170533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (40.636368383s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.64s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-170533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-170533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.349392757s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7fxwf" [5c582216-28c7-4d8d-b1d7-bc1f2a2bd3f7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00379417s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/Stop (8.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-170533 --alsologtostderr -v=3
E0924 19:42:36.276072    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/auto-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-170533 --alsologtostderr -v=3: (8.563024374s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.56s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7fxwf" [5c582216-28c7-4d8d-b1d7-bc1f2a2bd3f7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003772322s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-101049 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-170533 -n newest-cni-170533
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-170533 -n newest-cni-170533: exit status 7 (75.289198ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-170533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (19.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-170533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 19:42:44.517330    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/kindnet-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-170533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (18.792883142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-170533 -n newest-cni-170533
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.25s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-101049 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-101049 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049: exit status 2 (293.577029ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049: exit status 2 (301.051362ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-101049 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049
E0924 19:42:48.567913    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/bridge-466463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-101049 -n default-k8s-diff-port-101049
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-170533 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.7s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-170533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-170533 -n newest-cni-170533
E0924 19:43:04.642754    7514 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/no-preload-316593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-170533 -n newest-cni-170533: exit status 2 (289.940611ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-170533 -n newest-cni-170533
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-170533 -n newest-cni-170533: exit status 2 (322.174604ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-170533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-170533 -n newest-cni-170533
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-170533 -n newest-cni-170533
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.70s)

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-961334 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-961334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-961334
--- SKIP: TestDownloadOnlyKic (0.55s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-466463 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-466463

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-466463

>>> host: /etc/nsswitch.conf:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /etc/hosts:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /etc/resolv.conf:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-466463

>>> host: crictl pods:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: crictl containers:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> k8s: describe netcat deployment:
error: context "cilium-466463" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-466463" does not exist

>>> k8s: netcat logs:
error: context "cilium-466463" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-466463" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-466463" does not exist

>>> k8s: coredns logs:
error: context "cilium-466463" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-466463" does not exist

>>> k8s: api server logs:
error: context "cilium-466463" does not exist

>>> host: /etc/cni:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: ip a s:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: ip r s:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: iptables-save:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: iptables table nat:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-466463

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-466463

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-466463" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-466463" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-466463

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-466463

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-466463" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-466463" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-466463" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-466463" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-466463" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: kubelet daemon config:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> k8s: kubelet logs:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19700-2203/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 19:09:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-209300
contexts:
- context:
    cluster: offline-docker-209300
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 19:09:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-209300
  name: offline-docker-209300
current-context: offline-docker-209300
kind: Config
preferences: {}
users:
- name: offline-docker-209300
  user:
    client-certificate: /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/offline-docker-209300/client.crt
    client-key: /home/jenkins/minikube-integration/19700-2203/.minikube/profiles/offline-docker-209300/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-466463

>>> host: docker daemon status:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: docker daemon config:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: docker system info:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: cri-docker daemon status:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: cri-docker daemon config:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: cri-dockerd version:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: containerd daemon status:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: containerd daemon config:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: containerd config dump:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: crio daemon status:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: crio daemon config:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: /etc/crio:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

>>> host: crio config:
* Profile "cilium-466463" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466463"

----------------------- debugLogs end: cilium-466463 [took: 3.781977285s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-466463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-466463
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-197977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-197977
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
