Test Report: Docker_Linux_docker_arm64 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-27:36389

Failed tests (1/342)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 74.31        |
TestAddons/parallel/Registry (74.31s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 8.100394ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-cfh4x" [7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006709101s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pn662" [eb773589-5926-4f4f-8548-d2dee389a285] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003723867s
addons_test.go:338: (dbg) Run:  kubectl --context addons-835847 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-835847 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-835847 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.115643754s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-835847 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 ip
2024/09/27 00:28:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-835847
helpers_test.go:235: (dbg) docker inspect addons-835847:

-- stdout --
	[
	    {
	        "Id": "046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328",
	        "Created": "2024-09-27T00:15:19.16308535Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T00:15:19.321446461Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/hostname",
	        "HostsPath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/hosts",
	        "LogPath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328-json.log",
	        "Name": "/addons-835847",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-835847:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-835847",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347-init/diff:/var/lib/docker/overlay2/3144040d268400c51a492b73fb520261a7f283b4a42ff2b53daf66af92d700ae/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-835847",
	                "Source": "/var/lib/docker/volumes/addons-835847/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-835847",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-835847",
	                "name.minikube.sigs.k8s.io": "addons-835847",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "597cf295b0e02e1436557b2c044f41661d1e1c07f8606aedce7c5595f9c72f37",
	            "SandboxKey": "/var/run/docker/netns/597cf295b0e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-835847": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8e3c5bf1226b19b8684ecbd9040cf26155bb704544af3f37377df089f6297817",
	                    "EndpointID": "c44f81b040b7b4de068538501ffa0877e510e499f521b4143008d069d76987cd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-835847",
	                        "046d9d4a776e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
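The inspect output above shows the registry's `5000/tcp` binding published on `127.0.0.1:32770` (the `[DEBUG] GET http://192.168.49.2:5000` probe earlier targets the container's network IP instead). A minimal, self-contained sketch of pulling that binding out of saved inspect JSON; the JSON fragment below is an excerpt of the output above, embedded so the snippet runs without a live container, and the `docker inspect -f` template in the comment is the usual one-liner against a running node:

```shell
# Excerpt of the docker inspect output above, saved so this snippet is self-contained.
cat > /tmp/addons-835847-inspect.json <<'EOF'
[{"NetworkSettings":{"Ports":{"5000/tcp":[{"HostIp":"127.0.0.1","HostPort":"32770"}]}}}]
EOF

# Against a live container, the equivalent Go-template query would be:
#   docker inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-835847
python3 - <<'EOF'
import json

with open("/tmp/addons-835847-inspect.json") as f:
    data = json.load(f)

# docker inspect returns a JSON array, one object per inspected container.
binding = data[0]["NetworkSettings"]["Ports"]["5000/tcp"][0]
print(binding["HostIp"] + ":" + binding["HostPort"])
EOF
```

This prints `127.0.0.1:32770`, the host-side address the registry is reachable on from the CI machine itself.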
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-835847 -n addons-835847
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 logs -n 25: (1.073686964s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-739605   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-739605              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-739605              | download-only-739605   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | -o=json --download-only              | download-only-574047   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-574047              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-574047              | download-only-574047   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-739605              | download-only-739605   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-574047              | download-only-574047   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | --download-only -p                   | download-docker-686350 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | download-docker-686350               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-686350            | download-docker-686350 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | --download-only -p                   | binary-mirror-571152   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | binary-mirror-571152                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40555               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-571152              | binary-mirror-571152   | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| addons  | disable dashboard -p                 | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | addons-835847                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | addons-835847                        |                        |         |         |                     |                     |
	| start   | -p addons-835847 --wait=true         | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:18 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-835847 addons disable         | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:19 UTC | 27 Sep 24 00:19 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | -p addons-835847                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-835847 addons disable         | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-835847 addons                 | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-835847 addons                 | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-835847 ip                     | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	| addons  | addons-835847 addons disable         | addons-835847          | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:55.582964    8355 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:55.583345    8355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:55.583376    8355 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:55.583396    8355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:55.583677    8355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:14:55.584206    8355 out.go:352] Setting JSON to false
	I0927 00:14:55.584969    8355 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3447,"bootTime":1727392649,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0927 00:14:55.585057    8355 start.go:139] virtualization:  
	I0927 00:14:55.588842    8355 out.go:177] * [addons-835847] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:14:55.590766    8355 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:14:55.590834    8355 notify.go:220] Checking for updates...
	I0927 00:14:55.593083    8355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:55.594876    8355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	I0927 00:14:55.596759    8355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	I0927 00:14:55.598540    8355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:14:55.600599    8355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:14:55.602506    8355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:55.622917    8355 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:14:55.623038    8355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:55.689159    8355 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:14:55.67916877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:14:55.689277    8355 docker.go:318] overlay module found
	I0927 00:14:55.691510    8355 out.go:177] * Using the docker driver based on user configuration
	I0927 00:14:55.693178    8355 start.go:297] selected driver: docker
	I0927 00:14:55.693197    8355 start.go:901] validating driver "docker" against <nil>
	I0927 00:14:55.693223    8355 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:14:55.693868    8355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:55.747195    8355 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:14:55.738114718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:14:55.747395    8355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:55.747630    8355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:14:55.749390    8355 out.go:177] * Using Docker driver with root privileges
	I0927 00:14:55.750952    8355 cni.go:84] Creating CNI manager for ""
	I0927 00:14:55.751027    8355 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:14:55.751040    8355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:14:55.751111    8355 start.go:340] cluster config:
	{Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:14:55.753280    8355 out.go:177] * Starting "addons-835847" primary control-plane node in "addons-835847" cluster
	I0927 00:14:55.755160    8355 cache.go:121] Beginning downloading kic base image for docker with docker
	I0927 00:14:55.757060    8355 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:14:55.759333    8355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:14:55.759382    8355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 00:14:55.759394    8355 cache.go:56] Caching tarball of preloaded images
	I0927 00:14:55.759408    8355 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:14:55.759485    8355 preload.go:172] Found /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 00:14:55.759496    8355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 00:14:55.759848    8355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/config.json ...
	I0927 00:14:55.759878    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/config.json: {Name:mk28cc37583ccb48ee2b43c135e040bd4836d4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:14:55.774453    8355 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:14:55.774552    8355 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:14:55.774569    8355 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:14:55.774574    8355 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:14:55.774581    8355 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:14:55.774586    8355 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 00:15:12.509028    8355 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 00:15:12.509067    8355 cache.go:194] Successfully downloaded all kic artifacts
	I0927 00:15:12.509096    8355 start.go:360] acquireMachinesLock for addons-835847: {Name:mkb615d14eff31a0a732f121850f6b6d555eb931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:12.509213    8355 start.go:364] duration metric: took 95.334µs to acquireMachinesLock for "addons-835847"
	I0927 00:15:12.509244    8355 start.go:93] Provisioning new machine with config: &{Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:15:12.509322    8355 start.go:125] createHost starting for "" (driver="docker")
	I0927 00:15:12.512025    8355 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 00:15:12.512275    8355 start.go:159] libmachine.API.Create for "addons-835847" (driver="docker")
	I0927 00:15:12.512317    8355 client.go:168] LocalClient.Create starting
	I0927 00:15:12.512431    8355 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem
	I0927 00:15:12.862261    8355 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem
	I0927 00:15:13.113712    8355 cli_runner.go:164] Run: docker network inspect addons-835847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 00:15:13.129492    8355 cli_runner.go:211] docker network inspect addons-835847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 00:15:13.129588    8355 network_create.go:284] running [docker network inspect addons-835847] to gather additional debugging logs...
	I0927 00:15:13.129611    8355 cli_runner.go:164] Run: docker network inspect addons-835847
	W0927 00:15:13.144407    8355 cli_runner.go:211] docker network inspect addons-835847 returned with exit code 1
	I0927 00:15:13.144438    8355 network_create.go:287] error running [docker network inspect addons-835847]: docker network inspect addons-835847: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-835847 not found
	I0927 00:15:13.144452    8355 network_create.go:289] output of [docker network inspect addons-835847]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-835847 not found
	
	** /stderr **
	I0927 00:15:13.144545    8355 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:15:13.159735    8355 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bc5360}
	I0927 00:15:13.159779    8355 network_create.go:124] attempt to create docker network addons-835847 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 00:15:13.159837    8355 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-835847 addons-835847
	I0927 00:15:13.231255    8355 network_create.go:108] docker network addons-835847 192.168.49.0/24 created
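Editor's note: in the lines above, minikube picks the first free private /24 (192.168.49.0/24), derives the gateway, client range, and broadcast from it, and then runs `docker network create`. The address derivation itself is plain /24 arithmetic; a minimal shell sketch (the subnet value is taken from the log, the variable names are illustrative):

```shell
# Derive the addresses minikube computes for a /24 subnet such as 192.168.49.0/24.
subnet="192.168.49.0/24"
base="${subnet%.*}"            # strip ".0/24" -> 192.168.49
echo "gateway:   ${base}.1"
echo "first IP:  ${base}.2"    # assigned to the control-plane container (kic static IP)
echo "last IP:   ${base}.254"
echo "broadcast: ${base}.255"
```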
	I0927 00:15:13.231298    8355 kic.go:121] calculated static IP "192.168.49.2" for the "addons-835847" container
	I0927 00:15:13.231369    8355 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 00:15:13.245619    8355 cli_runner.go:164] Run: docker volume create addons-835847 --label name.minikube.sigs.k8s.io=addons-835847 --label created_by.minikube.sigs.k8s.io=true
	I0927 00:15:13.264827    8355 oci.go:103] Successfully created a docker volume addons-835847
	I0927 00:15:13.264912    8355 cli_runner.go:164] Run: docker run --rm --name addons-835847-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-835847 --entrypoint /usr/bin/test -v addons-835847:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 00:15:15.390166    8355 cli_runner.go:217] Completed: docker run --rm --name addons-835847-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-835847 --entrypoint /usr/bin/test -v addons-835847:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.125216019s)
	I0927 00:15:15.390192    8355 oci.go:107] Successfully prepared a docker volume addons-835847
	I0927 00:15:15.390224    8355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:15:15.390243    8355 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 00:15:15.390310    8355 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-835847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 00:15:19.083766    8355 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-835847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.693410155s)
	I0927 00:15:19.083797    8355 kic.go:203] duration metric: took 3.693551141s to extract preloaded images to volume ...
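Editor's note: the preload step above extracts an lz4-compressed image tarball into the `addons-835847` volume via a throwaway container running `tar -I lz4 -xf /preloaded.tar -C /extractDir`. The same pack-then-extract-into-target pattern, sketched with gzip and local temp directories instead of lz4 and a Docker volume (all paths here are examples, not the ones minikube uses):

```shell
# Mirror minikube's "extract tarball into a target dir" step.
src=$(mktemp -d); dst=$(mktemp -d); tarball=$(mktemp)
echo "image-layer" > "$src/layer.txt"
tar -czf "$tarball" -C "$src" .    # minikube compresses with lz4; gzip stands in here
tar -xzf "$tarball" -C "$dst"      # the -C target-dir extraction seen in the log
cat "$dst/layer.txt"
```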
	W0927 00:15:19.083938    8355 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 00:15:19.084087    8355 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 00:15:19.147494    8355 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-835847 --name addons-835847 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-835847 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-835847 --network addons-835847 --ip 192.168.49.2 --volume addons-835847:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 00:15:19.494651    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Running}}
	I0927 00:15:19.517803    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:19.541883    8355 cli_runner.go:164] Run: docker exec addons-835847 stat /var/lib/dpkg/alternatives/iptables
	I0927 00:15:19.608869    8355 oci.go:144] the created container "addons-835847" has a running status.
	I0927 00:15:19.608901    8355 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa...
	I0927 00:15:20.378900    8355 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 00:15:20.399032    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:20.416011    8355 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 00:15:20.416029    8355 kic_runner.go:114] Args: [docker exec --privileged addons-835847 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 00:15:20.474353    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:20.498624    8355 machine.go:93] provisionDockerMachine start ...
	I0927 00:15:20.498709    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:20.520787    8355 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:20.521036    8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 00:15:20.521049    8355 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:15:20.651255    8355 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-835847
	
	I0927 00:15:20.651275    8355 ubuntu.go:169] provisioning hostname "addons-835847"
	I0927 00:15:20.651335    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:20.671336    8355 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:20.671578    8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 00:15:20.671598    8355 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-835847 && echo "addons-835847" | sudo tee /etc/hostname
	I0927 00:15:20.815113    8355 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-835847
	
	I0927 00:15:20.815194    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:20.831991    8355 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:20.832270    8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 00:15:20.832296    8355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-835847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-835847/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-835847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:15:20.963827    8355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
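Editor's note: the SSH command above is minikube's idempotent /etc/hosts fix-up: do nothing if the hostname is already present, otherwise rewrite an existing `127.0.1.1` entry or append a fresh one. The same logic, exercised against a scratch file rather than the real /etc/hosts (hostname taken from the log; file contents are made up for illustration):

```shell
# Reproduce the idempotent hosts-file edit on a temp copy.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name="addons-835847"
if ! grep -Eq "[[:space:]]${name}\$" "$hosts"; then
  if grep -Eq '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # rewrite the existing 127.0.1.1 entry in place
    sed -i -E "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
  else
    # or append a fresh entry
    echo "127.0.1.1 ${name}" >> "$hosts"
  fi
fi
cat "$hosts"
```

Running it twice leaves the file unchanged the second time, which is exactly why minikube can issue the command unconditionally on every provision.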
	I0927 00:15:20.963861    8355 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-2273/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-2273/.minikube}
	I0927 00:15:20.963882    8355 ubuntu.go:177] setting up certificates
	I0927 00:15:20.963891    8355 provision.go:84] configureAuth start
	I0927 00:15:20.963950    8355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-835847
	I0927 00:15:20.981374    8355 provision.go:143] copyHostCerts
	I0927 00:15:20.981455    8355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-2273/.minikube/ca.pem (1078 bytes)
	I0927 00:15:20.981577    8355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-2273/.minikube/cert.pem (1123 bytes)
	I0927 00:15:20.981642    8355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-2273/.minikube/key.pem (1679 bytes)
	I0927 00:15:20.981694    8355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-2273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca-key.pem org=jenkins.addons-835847 san=[127.0.0.1 192.168.49.2 addons-835847 localhost minikube]
	I0927 00:15:21.603596    8355 provision.go:177] copyRemoteCerts
	I0927 00:15:21.603668    8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:15:21.603714    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:21.620234    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:21.717264    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:15:21.742966    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:15:21.765958    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
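Editor's note: `configureAuth` above generates a CA, then a server certificate whose SANs cover `127.0.0.1`, `192.168.49.2`, `addons-835847`, `localhost`, and `minikube`, and copies the results to /etc/docker. minikube does this in Go via crypto/x509; a hedged openssl re-creation of the equivalent chain (file names, key size, and the one-day validity are illustrative):

```shell
dir=$(mktemp -d)
# CA key and self-signed CA cert (minikubeCA is the CN minikube uses).
openssl genrsa -out "$dir/ca-key.pem" 2048
openssl req -new -x509 -key "$dir/ca-key.pem" -subj "/CN=minikubeCA" -days 1 -out "$dir/ca.pem"
# Server key and CSR, then sign with the SANs seen in the log.
openssl genrsa -out "$dir/server-key.pem" 2048
openssl req -new -key "$dir/server-key.pem" -subj "/O=jenkins.addons-835847" -out "$dir/server.csr"
printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-835847,DNS:localhost,DNS:minikube\n' > "$dir/san.cnf"
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" -CAkey "$dir/ca-key.pem" \
  -CAcreateserial -days 1 -extfile "$dir/san.cnf" -out "$dir/server.pem"
# The server cert must chain back to the CA that dockerd is given via --tlscacert.
openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"
```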
	I0927 00:15:21.787860    8355 provision.go:87] duration metric: took 823.956346ms to configureAuth
	I0927 00:15:21.787927    8355 ubuntu.go:193] setting minikube options for container-runtime
	I0927 00:15:21.788189    8355 config.go:182] Loaded profile config "addons-835847": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:21.788272    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:21.804406    8355 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:21.804645    8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 00:15:21.804664    8355 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 00:15:21.932584    8355 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0927 00:15:21.932603    8355 ubuntu.go:71] root file system type: overlay
	I0927 00:15:21.932730    8355 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 00:15:21.932819    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:21.951301    8355 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:21.951571    8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 00:15:21.951649    8355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 00:15:22.091606    8355 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
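Editor's note: the unit file minikube renders above relies on a systemd rule its own comments describe: for non-oneshot services, an empty `ExecStart=` clears any command inherited from a base unit, so the `ExecStart=` that follows becomes the only one. The pattern in isolation, as a minimal hypothetical drop-in (path and dockerd flags are examples, not minikube's full unit):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

Without the empty first line, systemd sees two `ExecStart=` settings and refuses to start the service with "Service has more than one ExecStart= setting", exactly as the comment in the rendered unit warns.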
	
	I0927 00:15:22.091728    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:22.108509    8355 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:22.108758    8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 00:15:22.108787    8355 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 00:15:22.877780    8355 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:18.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-27 00:15:22.085144394 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0927 00:15:22.877813    8355 machine.go:96] duration metric: took 2.37917045s to provisionDockerMachine
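The provisioning step above uses an install-if-changed idiom: `diff -u old new` exits non-zero when the files differ, and only then does the `||` branch move the new unit into place and restart the daemon. A minimal sketch of the same pattern, with scratch files and an `echo` standing in for the `systemctl` calls (all paths illustrative):

```shell
#!/bin/sh
set -eu

# Stage a candidate config next to the "live" one.
live=/tmp/demo-docker.service
new=/tmp/demo-docker.service.new
printf 'ExecStart=/usr/bin/dockerd\n' > "$live"
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$new"

# diff exits 0 when the files are identical and 1 when they differ,
# so the || branch runs only when an update is actually needed.
diff -u "$live" "$new" >/dev/null || {
    mv "$new" "$live"
    echo "config changed; daemon-reload and restart would run here"
}
```

When the files already match, the branch is skipped entirely, which is what makes repeated provisioning runs idempotent.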
	I0927 00:15:22.877825    8355 client.go:171] duration metric: took 10.365498477s to LocalClient.Create
	I0927 00:15:22.877841    8355 start.go:167] duration metric: took 10.365568163s to libmachine.API.Create "addons-835847"
	I0927 00:15:22.877854    8355 start.go:293] postStartSetup for "addons-835847" (driver="docker")
	I0927 00:15:22.877866    8355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:15:22.877944    8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:15:22.878028    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:22.894452    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:22.988926    8355 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:15:22.991902    8355 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 00:15:22.991947    8355 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 00:15:22.991959    8355 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 00:15:22.991974    8355 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 00:15:22.991988    8355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-2273/.minikube/addons for local assets ...
	I0927 00:15:22.992055    8355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-2273/.minikube/files for local assets ...
	I0927 00:15:22.992105    8355 start.go:296] duration metric: took 114.244293ms for postStartSetup
	I0927 00:15:22.992441    8355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-835847
	I0927 00:15:23.008730    8355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/config.json ...
	I0927 00:15:23.009018    8355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:15:23.009079    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:23.027435    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:23.116319    8355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 00:15:23.120221    8355 start.go:128] duration metric: took 10.61088474s to createHost
	I0927 00:15:23.120242    8355 start.go:83] releasing machines lock for "addons-835847", held for 10.611013935s
	I0927 00:15:23.120305    8355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-835847
	I0927 00:15:23.135648    8355 ssh_runner.go:195] Run: cat /version.json
	I0927 00:15:23.135683    8355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:15:23.135698    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:23.135749    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:23.156097    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:23.164186    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:23.376818    8355 ssh_runner.go:195] Run: systemctl --version
	I0927 00:15:23.380864    8355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 00:15:23.384798    8355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0927 00:15:23.409268    8355 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0927 00:15:23.409353    8355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:15:23.440088    8355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 00:15:23.440113    8355 start.go:495] detecting cgroup driver to use...
	I0927 00:15:23.440146    8355 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:15:23.440240    8355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
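The crictl.yaml write above pipes into `sudo tee` rather than using a shell redirection. The reason: in `sudo cmd > /etc/crictl.yaml` the redirection is performed by the unprivileged shell, so it fails on a root-owned path, whereas `tee` opens the file while running as root. The same shape, sketched against a scratch directory so no root is needed (paths illustrative):

```shell
#!/bin/sh
set -eu

# Scratch stand-in for /etc.
dest=/tmp/crictl-demo
mkdir -p "$dest"

# printf | tee writes the config; >/dev/null suppresses tee's echo
# of the content to stdout.
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  | tee "$dest/crictl.yaml" >/dev/null
cat "$dest/crictl.yaml"
```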
	I0927 00:15:23.456177    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 00:15:23.465748    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 00:15:23.474925    8355 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 00:15:23.474990    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 00:15:23.484597    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:23.494252    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 00:15:23.504538    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:23.513912    8355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:15:23.522440    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 00:15:23.532036    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 00:15:23.541253    8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
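The run of `sed` commands above rewrites containerd's config.toml in place; the key one forces `SystemdCgroup = false` to match the "cgroupfs" driver detected on the host. The same substitution applied to a sample file (path and contents illustrative; `sed -i -r` as used here is GNU sed):

```shell
#!/bin/sh
set -eu

cfg=/tmp/containerd-demo.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution the log runs: the capture group \1 preserves the
# line's original indentation while the value is rewritten.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```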
	I0927 00:15:23.550601    8355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:15:23.558738    8355 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:15:23.558829    8355 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:15:23.572102    8355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:15:23.580431    8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:23.673704    8355 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 00:15:23.769213    8355 start.go:495] detecting cgroup driver to use...
	I0927 00:15:23.769312    8355 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:15:23.769394    8355 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 00:15:23.782008    8355 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0927 00:15:23.782122    8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 00:15:23.794312    8355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:15:23.810771    8355 ssh_runner.go:195] Run: which cri-dockerd
	I0927 00:15:23.817398    8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 00:15:23.828570    8355 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0927 00:15:23.848148    8355 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 00:15:23.952377    8355 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 00:15:24.058793    8355 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 00:15:24.058967    8355 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 00:15:24.085851    8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:24.177468    8355 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 00:15:24.436627    8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 00:15:24.448603    8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:24.460983    8355 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 00:15:24.552134    8355 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 00:15:24.640823    8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:24.734979    8355 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 00:15:24.749039    8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:24.759996    8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:24.848289    8355 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 00:15:24.928702    8355 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 00:15:24.928861    8355 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 00:15:24.933380    8355 start.go:563] Will wait 60s for crictl version
	I0927 00:15:24.933504    8355 ssh_runner.go:195] Run: which crictl
	I0927 00:15:24.940159    8355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:15:24.975140    8355 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0927 00:15:24.975253    8355 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 00:15:24.998680    8355 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 00:15:25.023194    8355 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0927 00:15:25.023305    8355 cli_runner.go:164] Run: docker network inspect addons-835847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:15:25.039606    8355 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 00:15:25.043444    8355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
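The /etc/hosts update above is a remove-then-append refresh: `grep -v` drops any existing `host.minikube.internal` line, the new mapping is echoed after it, and the result is copied back over the original, so the entry ends up present exactly once no matter how many times the step runs. Reproduced against a scratch hosts file (paths illustrative):

```shell
#!/bin/sh
set -eu

hosts=/tmp/hosts-demo
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Filter out any existing mapping, append the fresh one, copy back.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
rm -f /tmp/h.$$
cat "$hosts"
```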
	I0927 00:15:25.054897    8355 kubeadm.go:883] updating cluster {Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:15:25.055019    8355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:15:25.055081    8355 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 00:15:25.073844    8355 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 00:15:25.073868    8355 docker.go:615] Images already preloaded, skipping extraction
	I0927 00:15:25.073937    8355 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 00:15:25.092646    8355 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 00:15:25.092675    8355 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:15:25.092685    8355 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0927 00:15:25.092794    8355 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-835847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:15:25.092863    8355 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 00:15:25.135999    8355 cni.go:84] Creating CNI manager for ""
	I0927 00:15:25.136029    8355 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:25.136040    8355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:15:25.136060    8355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-835847 NodeName:addons-835847 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:15:25.136233    8355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-835847"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:15:25.136302    8355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:15:25.144782    8355 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:15:25.144850    8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:15:25.156871    8355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 00:15:25.174562    8355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:15:25.192036    8355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0927 00:15:25.209382    8355 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 00:15:25.212648    8355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:15:25.223039    8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:25.307839    8355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:15:25.321992    8355 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847 for IP: 192.168.49.2
	I0927 00:15:25.322017    8355 certs.go:194] generating shared ca certs ...
	I0927 00:15:25.322034    8355 certs.go:226] acquiring lock for ca certs: {Name:mk6b469cb21598aa598a7ad76cb0e9fff426f760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:25.322154    8355 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key
	I0927 00:15:25.821501    8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt ...
	I0927 00:15:25.821536    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt: {Name:mk5a0578057d437dd3ec15b1fc2dc320142c3756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:25.821743    8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key ...
	I0927 00:15:25.821759    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key: {Name:mk121f6d140b9ff66f0fb5942b7fb7d03b6270c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:25.821870    8355 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key
	I0927 00:15:26.348212    8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.crt ...
	I0927 00:15:26.348247    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.crt: {Name:mkcab7a0550e3ae89f0be7bbad3b91f0d1f678eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.348432    8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key ...
	I0927 00:15:26.348445    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key: {Name:mk8e63bf4e68b1ab3013424d2ba114c292acb726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.348532    8355 certs.go:256] generating profile certs ...
	I0927 00:15:26.348590    8355 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.key
	I0927 00:15:26.348609    8355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt with IP's: []
	I0927 00:15:26.672714    8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt ...
	I0927 00:15:26.672744    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: {Name:mkbd4cd9b6e96659d61742c652c95d80a48d60e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.672921    8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.key ...
	I0927 00:15:26.672933    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.key: {Name:mk637c405f925463968a47027001c25855825222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.673019    8355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb
	I0927 00:15:26.673039    8355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 00:15:26.999445    8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb ...
	I0927 00:15:26.999476    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb: {Name:mkaf8fa885a27d87fe23843620326105754da1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.999655    8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb ...
	I0927 00:15:26.999669    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb: {Name:mk90ed01a36b135715259a523b426ac426ca466d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.999744    8355 certs.go:381] copying /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb -> /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt
	I0927 00:15:26.999825    8355 certs.go:385] copying /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb -> /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key
	I0927 00:15:26.999880    8355 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key
	I0927 00:15:26.999901    8355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt with IP's: []
	I0927 00:15:27.841185    8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt ...
	I0927 00:15:27.841218    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt: {Name:mk43215f1566b51f2de5f848457a9b34b2a4d67d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:27.841402    8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key ...
	I0927 00:15:27.841414    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key: {Name:mka499bf621e2c6b6397d3b54999e74c6e838c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:27.841619    8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 00:15:27.841660    8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:15:27.841689    8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:15:27.841716    8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/key.pem (1679 bytes)
	I0927 00:15:27.842326    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:15:27.865324    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 00:15:27.891742    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:15:27.914572    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 00:15:27.937337    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:15:27.959420    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:15:27.982205    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:15:28.004890    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:15:28.028728    8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:15:28.052053    8355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:15:28.069770    8355 ssh_runner.go:195] Run: openssl version
	I0927 00:15:28.075085    8355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:15:28.084122    8355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:28.087230    8355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:15 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:28.087308    8355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:28.093959    8355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:15:28.102764    8355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:15:28.105702    8355 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:15:28.105749    8355 kubeadm.go:392] StartCluster: {Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:28.105873    8355 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 00:15:28.123626    8355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:15:28.133421    8355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:15:28.142011    8355 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 00:15:28.142080    8355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:15:28.150221    8355 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:15:28.150241    8355 kubeadm.go:157] found existing configuration files:
	
	I0927 00:15:28.150292    8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:15:28.158417    8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:15:28.158499    8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:15:28.166662    8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:15:28.174851    8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:15:28.174916    8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:15:28.183074    8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:15:28.191328    8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:15:28.191411    8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:15:28.199895    8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:15:28.208016    8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:15:28.208091    8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:15:28.215691    8355 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 00:15:28.258071    8355 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:15:28.258140    8355 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:15:28.281314    8355 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 00:15:28.281384    8355 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0927 00:15:28.281425    8355 kubeadm.go:310] OS: Linux
	I0927 00:15:28.281475    8355 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 00:15:28.281527    8355 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 00:15:28.281577    8355 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 00:15:28.281628    8355 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 00:15:28.281680    8355 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 00:15:28.281738    8355 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 00:15:28.281787    8355 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 00:15:28.281838    8355 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 00:15:28.281888    8355 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 00:15:28.352824    8355 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:15:28.352938    8355 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:15:28.353035    8355 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:15:28.365127    8355 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:15:28.369171    8355 out.go:235]   - Generating certificates and keys ...
	I0927 00:15:28.369274    8355 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:15:28.369342    8355 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:15:28.791729    8355 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:15:29.737295    8355 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:15:30.229025    8355 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:15:30.500175    8355 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:15:30.765590    8355 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:15:30.765913    8355 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-835847 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:15:31.778248    8355 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:15:31.778522    8355 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-835847 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:15:32.205672    8355 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:15:32.925869    8355 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:15:33.313122    8355 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:15:33.313417    8355 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:15:33.477185    8355 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:15:34.153424    8355 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:15:34.654785    8355 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:15:34.844738    8355 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:15:35.408980    8355 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:15:35.409698    8355 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:15:35.413150    8355 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:15:35.415475    8355 out.go:235]   - Booting up control plane ...
	I0927 00:15:35.415597    8355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:15:35.415721    8355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:15:35.416997    8355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:15:35.430520    8355 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:15:35.436671    8355 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:15:35.436726    8355 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:15:35.534311    8355 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:15:35.534436    8355 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:15:36.535815    8355 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605638s
	I0927 00:15:36.535906    8355 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:15:43.036878    8355 kubeadm.go:310] [api-check] The API server is healthy after 6.501197753s
	I0927 00:15:43.058665    8355 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:15:43.073642    8355 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:15:43.099957    8355 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:15:43.100199    8355 kubeadm.go:310] [mark-control-plane] Marking the node addons-835847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:15:43.109954    8355 kubeadm.go:310] [bootstrap-token] Using token: s902bs.tf3jjmvfz7uwqdvh
	I0927 00:15:43.111854    8355 out.go:235]   - Configuring RBAC rules ...
	I0927 00:15:43.112000    8355 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:15:43.116635    8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:15:43.125976    8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:15:43.129288    8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:15:43.132730    8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:15:43.135965    8355 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:15:43.444854    8355 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:15:43.872202    8355 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:15:44.444681    8355 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:15:44.445746    8355 kubeadm.go:310] 
	I0927 00:15:44.445815    8355 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:15:44.445822    8355 kubeadm.go:310] 
	I0927 00:15:44.445897    8355 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:15:44.445902    8355 kubeadm.go:310] 
	I0927 00:15:44.445926    8355 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:15:44.445984    8355 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:15:44.446033    8355 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:15:44.446038    8355 kubeadm.go:310] 
	I0927 00:15:44.446091    8355 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:15:44.446095    8355 kubeadm.go:310] 
	I0927 00:15:44.446141    8355 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:15:44.446146    8355 kubeadm.go:310] 
	I0927 00:15:44.446197    8355 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:15:44.446271    8355 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:15:44.446338    8355 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:15:44.446342    8355 kubeadm.go:310] 
	I0927 00:15:44.446424    8355 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:15:44.446500    8355 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:15:44.446504    8355 kubeadm.go:310] 
	I0927 00:15:44.446587    8355 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s902bs.tf3jjmvfz7uwqdvh \
	I0927 00:15:44.446691    8355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd6a97da01c3c67156170d76cada7ca61301e3d64f415f6dbfb2beeb22c641c2 \
	I0927 00:15:44.446711    8355 kubeadm.go:310] 	--control-plane 
	I0927 00:15:44.446715    8355 kubeadm.go:310] 
	I0927 00:15:44.446799    8355 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:15:44.446805    8355 kubeadm.go:310] 
	I0927 00:15:44.447099    8355 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s902bs.tf3jjmvfz7uwqdvh \
	I0927 00:15:44.447219    8355 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd6a97da01c3c67156170d76cada7ca61301e3d64f415f6dbfb2beeb22c641c2 
	I0927 00:15:44.450712    8355 kubeadm.go:310] W0927 00:15:28.253273    1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:44.451015    8355 kubeadm.go:310] W0927 00:15:28.254096    1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:44.451231    8355 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0927 00:15:44.451339    8355 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:15:44.451360    8355 cni.go:84] Creating CNI manager for ""
	I0927 00:15:44.451379    8355 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:44.455522    8355 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 00:15:44.457526    8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 00:15:44.465851    8355 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 00:15:44.484459    8355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:15:44.484584    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:44.484658    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-835847 minikube.k8s.io/updated_at=2024_09_27T00_15_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-835847 minikube.k8s.io/primary=true
	I0927 00:15:44.620967    8355 ops.go:34] apiserver oom_adj: -16
	I0927 00:15:44.623033    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:45.123120    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:45.623700    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:46.123549    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:46.623991    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:47.123671    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:47.623809    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:48.124006    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:48.623911    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:49.123090    8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:49.217489    8355 kubeadm.go:1113] duration metric: took 4.732949994s to wait for elevateKubeSystemPrivileges
	I0927 00:15:49.217519    8355 kubeadm.go:394] duration metric: took 21.111773778s to StartCluster
	I0927 00:15:49.217536    8355 settings.go:142] acquiring lock: {Name:mk9e86eff3579e8eaf68f36246430af37e38da50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:49.217645    8355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-2273/kubeconfig
	I0927 00:15:49.218091    8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/kubeconfig: {Name:mk73f0586b74afb137afdc7b8bae894b77929339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:49.218303    8355 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:15:49.218432    8355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:15:49.218671    8355 config.go:182] Loaded profile config "addons-835847": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:49.218707    8355 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 00:15:49.218782    8355 addons.go:69] Setting yakd=true in profile "addons-835847"
	I0927 00:15:49.218802    8355 addons.go:234] Setting addon yakd=true in "addons-835847"
	I0927 00:15:49.218824    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.219313    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.219655    8355 addons.go:69] Setting metrics-server=true in profile "addons-835847"
	I0927 00:15:49.219679    8355 addons.go:234] Setting addon metrics-server=true in "addons-835847"
	I0927 00:15:49.219706    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.220214    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.222270    8355 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-835847"
	I0927 00:15:49.222304    8355 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-835847"
	I0927 00:15:49.222353    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.222359    8355 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-835847"
	I0927 00:15:49.222426    8355 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-835847"
	I0927 00:15:49.222472    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.222351    8355 addons.go:69] Setting cloud-spanner=true in profile "addons-835847"
	I0927 00:15:49.224263    8355 addons.go:234] Setting addon cloud-spanner=true in "addons-835847"
	I0927 00:15:49.224293    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.224716    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.222521    8355 addons.go:69] Setting registry=true in profile "addons-835847"
	I0927 00:15:49.225245    8355 addons.go:234] Setting addon registry=true in "addons-835847"
	I0927 00:15:49.225271    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.225681    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.229988    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.222526    8355 addons.go:69] Setting storage-provisioner=true in profile "addons-835847"
	I0927 00:15:49.231247    8355 addons.go:234] Setting addon storage-provisioner=true in "addons-835847"
	I0927 00:15:49.231321    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.231807    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.222531    8355 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-835847"
	I0927 00:15:49.243437    8355 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-835847"
	I0927 00:15:49.243825    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.222535    8355 addons.go:69] Setting volcano=true in profile "addons-835847"
	I0927 00:15:49.247991    8355 addons.go:234] Setting addon volcano=true in "addons-835847"
	I0927 00:15:49.248104    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.248635    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.222538    8355 addons.go:69] Setting volumesnapshots=true in profile "addons-835847"
	I0927 00:15:49.256202    8355 addons.go:234] Setting addon volumesnapshots=true in "addons-835847"
	I0927 00:15:49.256274    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.256785    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.222585    8355 out.go:177] * Verifying Kubernetes components...
	I0927 00:15:49.222599    8355 addons.go:69] Setting default-storageclass=true in profile "addons-835847"
	I0927 00:15:49.222608    8355 addons.go:69] Setting gcp-auth=true in profile "addons-835847"
	I0927 00:15:49.222612    8355 addons.go:69] Setting ingress=true in profile "addons-835847"
	I0927 00:15:49.222616    8355 addons.go:69] Setting ingress-dns=true in profile "addons-835847"
	I0927 00:15:49.222619    8355 addons.go:69] Setting inspektor-gadget=true in profile "addons-835847"
	I0927 00:15:49.223847    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.329948    8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:49.330072    8355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-835847"
	I0927 00:15:49.330408    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.350114    8355 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:15:49.351911    8355 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:15:49.351939    8355 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:15:49.352019    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.354730    8355 mustload.go:65] Loading cluster: addons-835847
	I0927 00:15:49.354923    8355 addons.go:234] Setting addon ingress=true in "addons-835847"
	I0927 00:15:49.355212    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.354937    8355 addons.go:234] Setting addon ingress-dns=true in "addons-835847"
	I0927 00:15:49.360553    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.360992    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.354944    8355 addons.go:234] Setting addon inspektor-gadget=true in "addons-835847"
	I0927 00:15:49.373717    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.374196    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.380801    8355 config.go:182] Loaded profile config "addons-835847": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:49.381075    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.394128    8355 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:15:49.398993    8355 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:15:49.399016    8355 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:15:49.399082    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.437857    8355 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:15:49.438118    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.438451    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:15:49.438621    8355 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:15:49.442509    8355 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 00:15:49.446301    8355 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:49.446372    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:15:49.446470    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.449372    8355 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:15:49.464901    8355 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 00:15:49.472290    8355 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 00:15:49.483657    8355 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:49.500182    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 00:15:49.473059    8355 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:15:49.488427    8355 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:15:49.500343    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:15:49.489831    8355 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-835847"
	I0927 00:15:49.500377    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.500436    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.500987    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.517789    8355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:49.517813    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:15:49.517876    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.500257    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.535902    8355 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:15:49.536534    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:15:49.538001    8355 addons.go:234] Setting addon default-storageclass=true in "addons-835847"
	I0927 00:15:49.538035    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.538439    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:49.553931    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:15:49.554282    8355 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:49.554300    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:15:49.554361    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.572584    8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:15:49.572607    8355 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:15:49.572706    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.578453    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.583290    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:15:49.598353    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:15:49.600607    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:15:49.602630    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:15:49.605573    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:15:49.608775    8355 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:15:49.611417    8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:15:49.611441    8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:15:49.611517    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.620367    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.621485    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.622938    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:49.650248    8355 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:15:49.652375    8355 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:15:49.656232    8355 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:15:49.656261    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:15:49.656329    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.656476    8355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:49.656550    8355 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:15:49.656586    8355 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:15:49.656651    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.666487    8355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:49.688256    8355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:15:49.690311    8355 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:15:49.690334    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:15:49.690395    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.696876    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.710508    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.725746    8355 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:15:49.728263    8355 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:15:49.733826    8355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:49.733850    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:15:49.733916    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.748567    8355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:15:49.785551    8355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:15:49.797247    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.805058    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.823714    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.825436    8355 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:49.825456    8355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:15:49.825516    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:49.838148    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.841438    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.842171    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.858504    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.873954    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:49.890019    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:50.505701    8355 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:15:50.505765    8355 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:15:50.732812    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:50.733466    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:15:50.742111    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:15:50.781207    8355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:15:50.781280    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:15:50.847297    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:50.892400    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:50.953993    8355 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:15:50.954067    8355 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:15:50.963792    8355 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:15:50.963857    8355 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:15:50.995621    8355 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:15:50.995684    8355 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:15:51.004924    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:51.008848    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:51.039095    8355 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:15:51.039168    8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:15:51.148311    8355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:15:51.148376    8355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:15:51.213162    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:51.216366    8355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:15:51.216422    8355 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:15:51.249541    8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:15:51.249616    8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:15:51.273897    8355 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:15:51.273964    8355 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:15:51.323569    8355 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:51.323642    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:15:51.335786    8355 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:15:51.335859    8355 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:15:51.420665    8355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:51.420738    8355 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:15:51.500836    8355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:15:51.500907    8355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:15:51.543095    8355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:15:51.543160    8355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:15:51.588879    8355 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:15:51.588949    8355 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:15:51.592689    8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:15:51.592750    8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:15:51.615380    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:51.701327    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:51.710692    8355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:15:51.710756    8355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:15:51.801629    8355 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:51.801698    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:15:51.827115    8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:15:51.827145    8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:15:51.872650    8355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:15:51.872676    8355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:15:51.969270    8355 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.22066858s)
	I0927 00:15:51.969302    8355 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0927 00:15:51.970327    8355 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.184754644s)
	I0927 00:15:51.971030    8355 node_ready.go:35] waiting up to 6m0s for node "addons-835847" to be "Ready" ...
	I0927 00:15:51.977817    8355 node_ready.go:49] node "addons-835847" has status "Ready":"True"
	I0927 00:15:51.977844    8355 node_ready.go:38] duration metric: took 6.789592ms for node "addons-835847" to be "Ready" ...
	I0927 00:15:51.977855    8355 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:15:51.995041    8355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:52.046176    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:52.152336    8355 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:15:52.152370    8355 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:15:52.270946    8355 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:15:52.270978    8355 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:15:52.294167    8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:15:52.294206    8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:15:52.472947    8355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-835847" context rescaled to 1 replicas
	I0927 00:15:52.493497    8355 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:52.493518    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:15:52.511718    8355 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:52.511793    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:15:52.534147    8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:15:52.534178    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:15:52.664040    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:52.761843    8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:15:52.761875    8355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:15:52.881054    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:53.046930    8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:15:53.046956    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:15:53.286969    8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:15:53.287001    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:15:53.603223    8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:53.603289    8355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:15:53.855333    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.12243669s)
	I0927 00:15:53.855408    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.121895145s)
	I0927 00:15:54.002494    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:54.712529    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:56.004941    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:56.634923    8355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:15:56.635001    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:56.662590    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:57.597681    8355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:15:57.745871    8355 addons.go:234] Setting addon gcp-auth=true in "addons-835847"
	I0927 00:15:57.745924    8355 host.go:66] Checking if "addons-835847" exists ...
	I0927 00:15:57.746398    8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
	I0927 00:15:57.768690    8355 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:15:57.768746    8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
	I0927 00:15:57.831910    8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
	I0927 00:15:58.014887    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:59.375658    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.528276525s)
	I0927 00:15:59.375761    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.483294097s)
	I0927 00:15:59.376025    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.37103202s)
	I0927 00:15:59.376143    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.633940999s)
	I0927 00:15:59.376177    8355 addons.go:475] Verifying addon ingress=true in "addons-835847"
	I0927 00:15:59.380131    8355 out.go:177] * Verifying ingress addon...
	I0927 00:15:59.383229    8355 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:15:59.391660    8355 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:15:59.391721    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:15:59.890827    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:00.387921    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:00.564946    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:00.927767    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:01.403359    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:01.873952    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.660709927s)
	I0927 00:16:01.874020    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.258561087s)
	I0927 00:16:01.874036    8355 addons.go:475] Verifying addon registry=true in "addons-835847"
	I0927 00:16:01.874165    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.865250383s)
	I0927 00:16:01.874708    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.173293146s)
	I0927 00:16:01.874737    8355 addons.go:475] Verifying addon metrics-server=true in "addons-835847"
	I0927 00:16:01.874821    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.828591123s)
	I0927 00:16:01.875244    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.211162886s)
	W0927 00:16:01.875285    8355 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:01.875304    8355 retry.go:31] will retry after 264.137158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
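The failure above is a CRD-establishment race: `csi-hostpath-snapshotclass.yaml` applies a `VolumeSnapshotClass` custom resource in the same `kubectl apply` batch that creates its CRD, and the API server has not yet registered the new kind, hence "no matches for kind ... ensure CRDs are installed first". The `retry.go:31` line shows minikube's response: re-run the apply after a backoff. A minimal, self-contained sketch of that retry-with-backoff pattern (the `apply` stub below is hypothetical and simply simulates an apply that succeeds once the CRD is established; it is not minikube's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// calls counts apply attempts; the stub "establishes" the CRD on the 3rd try.
var calls int

// apply is a hypothetical stand-in for running `kubectl apply`; it fails
// until the CRD is registered, mimicking the error seen in the log.
func apply() error {
	calls++
	if calls < 3 {
		return errors.New(`no matches for kind "VolumeSnapshotClass"`)
	}
	return nil
}

// retryApply re-runs apply with an exponentially growing backoff, mirroring
// the "will retry after 264.137158ms" behaviour logged by retry.go.
func retryApply(maxAttempts int) error {
	backoff := 10 * time.Millisecond
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	return err
}

func main() {
	if err := retryApply(5); err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("apply succeeded after", calls, "attempts")
}
```

In the real run the second attempt (at 00:16:02.140, using `kubectl apply --force`) succeeded in 2.31s once the snapshot CRDs were established, which is why the test proceeded past this point.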
	I0927 00:16:01.875497    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.994399315s)
	I0927 00:16:01.880045    8355 out.go:177] * Verifying registry addon...
	I0927 00:16:01.880175    8355 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-835847 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:16:01.883090    8355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:16:01.923139    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:01.923741    8355 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:16:01.923802    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.023560    8355 pod_ready.go:98] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2
}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:15:49 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:15:50 +0000 UTC,FinishedAt:2024-09-27 00:16:00 +0000 UTC,ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc Started:0x4001d94e80 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x400211dd00} {Name:kube-api-access-skppz MountPath:/var/run/secrets/kubernetes.io/serviceaccount
ReadOnly:true RecursiveReadOnly:0x400211dd10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:02.023639    8355 pod_ready.go:82] duration metric: took 10.028563935s for pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace to be "Ready" ...
	E0927 00:16:02.023664    8355 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.4
9.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:15:49 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:15:50 +0000 UTC,FinishedAt:2024-09-27 00:16:00 +0000 UTC,ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc Started:0x4001d94e80 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x400211dd00} {Name:kube-api-access-skppz MountPath:/var/run/secrets
/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x400211dd10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:02.023688    8355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:02.140344    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:02.425985    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.426960    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:02.913892    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.915066    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:03.004802    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.292222471s)
	I0927 00:16:03.004836    8355 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-835847"
	I0927 00:16:03.005036    8355 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.236324849s)
	I0927 00:16:03.008588    8355 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:16:03.008676    8355 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:03.011501    8355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:16:03.013921    8355 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:16:03.015859    8355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:16:03.015887    8355 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:16:03.069592    8355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:16:03.069663    8355 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:16:03.092488    8355 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:16:03.092576    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.204946    8355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:03.205015    8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:16:03.247751    8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:03.394435    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:03.394999    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:03.516052    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.888386    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:03.889463    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:04.016793    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.029898    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:04.390853    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:04.392044    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:04.454547    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.314108564s)
	I0927 00:16:04.516881    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.731910    8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.484125546s)
	I0927 00:16:04.735070    8355 addons.go:475] Verifying addon gcp-auth=true in "addons-835847"
	I0927 00:16:04.738677    8355 out.go:177] * Verifying gcp-auth addon...
	I0927 00:16:04.741744    8355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:16:04.745200    8355 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:16:04.888474    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:04.889138    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:05.017970    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.387705    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:05.388347    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:05.517132    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.888919    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:05.889436    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:06.016224    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.031876    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:06.387664    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:06.388649    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:06.515824    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.887444    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:06.888274    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:07.015920    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:07.388808    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:07.389405    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:07.516535    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:07.888218    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:07.888737    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:08.017091    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.387960    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:08.388881    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:08.515958    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.530648    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:08.887089    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:08.888504    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:09.016049    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.386673    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:09.387587    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:09.515825    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.887951    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:09.888308    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:10.016648    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.388793    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:10.390453    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:10.516484    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.887655    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:10.888325    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:11.016131    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.029965    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:11.388112    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:11.388615    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:11.516681    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.887728    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:11.888826    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:12.016301    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:12.386725    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:12.388630    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:12.516208    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:12.890979    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:12.892213    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:13.016588    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.032922    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:13.387947    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:13.388584    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:13.516541    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.889252    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:13.891793    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:14.016990    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.389533    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:14.390831    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:14.516718    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.894676    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:14.895514    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:15.017302    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.387859    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:15.389071    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:15.516508    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.529584    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:15.887101    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:15.888646    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:16.018777    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.389290    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:16.390716    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:16.516332    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.887264    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:16.889460    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.015774    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:17.387963    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.388683    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:17.517094    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:17.531195    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:17.888544    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.889428    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:18.016613    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.388543    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:18.389154    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:18.516521    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.902410    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:18.903841    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:19.018651    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:19.399223    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:19.400308    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:19.516727    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:19.887449    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:19.889276    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:20.017480    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.031417    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:20.388517    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:20.388958    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:20.516495    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.889394    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:20.890731    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:21.016460    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.387651    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:21.388807    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:21.517513    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.887693    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:21.889865    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:22.016402    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:22.388667    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:22.389687    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:22.517025    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:22.530760    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:22.889320    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:22.889887    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:23.017907    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.388501    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:23.389463    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:23.515642    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.887643    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:23.889660    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:24.021051    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.389470    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:24.390423    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:24.516192    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.887227    8355 kapi.go:107] duration metric: took 23.004133499s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:16:24.888650    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:25.016320    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.029836    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:25.387938    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:25.517314    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.887948    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:26.017226    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:26.388451    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:26.516752    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:26.890491    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:27.016106    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.387906    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:27.516767    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.531904    8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:27.890543    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:28.020693    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.039434    8355 pod_ready.go:93] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:28.039462    8355 pod_ready.go:82] duration metric: took 26.015727515s for pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.039474    8355 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.044984    8355 pod_ready.go:93] pod "etcd-addons-835847" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:28.045007    8355 pod_ready.go:82] duration metric: took 5.52514ms for pod "etcd-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.045019    8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.052060    8355 pod_ready.go:93] pod "kube-apiserver-addons-835847" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:28.052201    8355 pod_ready.go:82] duration metric: took 7.170344ms for pod "kube-apiserver-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.052215    8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.063830    8355 pod_ready.go:93] pod "kube-controller-manager-addons-835847" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:28.063857    8355 pod_ready.go:82] duration metric: took 11.632529ms for pod "kube-controller-manager-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.063869    8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sh55m" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.082626    8355 pod_ready.go:93] pod "kube-proxy-sh55m" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:28.082697    8355 pod_ready.go:82] duration metric: took 18.819175ms for pod "kube-proxy-sh55m" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.082723    8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.388236    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:28.428590    8355 pod_ready.go:93] pod "kube-scheduler-addons-835847" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:28.428654    8355 pod_ready.go:82] duration metric: took 345.90873ms for pod "kube-scheduler-addons-835847" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.428680    8355 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pxf2p" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.516192    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.828201    8355 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pxf2p" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:28.828342    8355 pod_ready.go:82] duration metric: took 399.639998ms for pod "nvidia-device-plugin-daemonset-pxf2p" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:28.828377    8355 pod_ready.go:39] duration metric: took 36.850509663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:16:28.828486    8355 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:16:28.828652    8355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:16:28.846957    8355 api_server.go:72] duration metric: took 39.628618173s to wait for apiserver process to appear ...
	I0927 00:16:28.847032    8355 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:16:28.847071    8355 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 00:16:28.855046    8355 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 00:16:28.856021    8355 api_server.go:141] control plane version: v1.31.1
	I0927 00:16:28.856099    8355 api_server.go:131] duration metric: took 9.045418ms to wait for apiserver health ...
	I0927 00:16:28.856125    8355 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:16:28.887823    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:29.016445    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:29.036384    8355 system_pods.go:59] 17 kube-system pods found
	I0927 00:16:29.036420    8355 system_pods.go:61] "coredns-7c65d6cfc9-tvzhv" [a2efa460-a57a-45eb-8364-cf85abad82cf] Running
	I0927 00:16:29.036429    8355 system_pods.go:61] "csi-hostpath-attacher-0" [c3869238-f637-430d-b854-92bd76cc44fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:29.036437    8355 system_pods.go:61] "csi-hostpath-resizer-0" [4b297f3d-aeaa-4a5d-8b74-f7174019b812] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:29.036477    8355 system_pods.go:61] "csi-hostpathplugin-jmcgj" [277b306d-cb93-419b-8cee-55a5570d009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:29.036488    8355 system_pods.go:61] "etcd-addons-835847" [79b5e8ee-117c-48e1-baab-fe318fca930e] Running
	I0927 00:16:29.036493    8355 system_pods.go:61] "kube-apiserver-addons-835847" [579d15f4-818a-4fbc-a6db-23d34aeffea8] Running
	I0927 00:16:29.036498    8355 system_pods.go:61] "kube-controller-manager-addons-835847" [ee2d4237-f212-4068-ba76-af07caa6a2fa] Running
	I0927 00:16:29.036521    8355 system_pods.go:61] "kube-ingress-dns-minikube" [afb9c90d-de1d-4d41-a089-c58d2ad953f4] Running
	I0927 00:16:29.036526    8355 system_pods.go:61] "kube-proxy-sh55m" [d5ff899a-b75e-429d-bc03-d269a2a48ce2] Running
	I0927 00:16:29.036530    8355 system_pods.go:61] "kube-scheduler-addons-835847" [1d263758-8b84-4b6c-995e-7b727372026c] Running
	I0927 00:16:29.036537    8355 system_pods.go:61] "metrics-server-84c5f94fbc-5ck7c" [b1561527-6ede-4c7d-89b0-dc3e89f14879] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:29.036545    8355 system_pods.go:61] "nvidia-device-plugin-daemonset-pxf2p" [29a178e4-9317-46ca-b2a2-4a1fa8ca2860] Running
	I0927 00:16:29.036551    8355 system_pods.go:61] "registry-66c9cd494c-cfh4x" [7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2] Running
	I0927 00:16:29.036561    8355 system_pods.go:61] "registry-proxy-pn662" [eb773589-5926-4f4f-8548-d2dee389a285] Running
	I0927 00:16:29.036570    8355 system_pods.go:61] "snapshot-controller-56fcc65765-gdzpx" [f2bb6dd6-0f43-437c-a5b9-d91f084332f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:29.036583    8355 system_pods.go:61] "snapshot-controller-56fcc65765-jjm9x" [6aa4e47b-5902-4da5-a4a9-f6ccd932944c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:29.036588    8355 system_pods.go:61] "storage-provisioner" [17e1dac7-0278-4861-bbf2-9b70936db1b4] Running
	I0927 00:16:29.036599    8355 system_pods.go:74] duration metric: took 180.446819ms to wait for pod list to return data ...
	I0927 00:16:29.036606    8355 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:16:29.229001    8355 default_sa.go:45] found service account: "default"
	I0927 00:16:29.229027    8355 default_sa.go:55] duration metric: took 192.414136ms for default service account to be created ...
	I0927 00:16:29.229040    8355 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:16:29.388055    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:29.436020    8355 system_pods.go:86] 17 kube-system pods found
	I0927 00:16:29.436052    8355 system_pods.go:89] "coredns-7c65d6cfc9-tvzhv" [a2efa460-a57a-45eb-8364-cf85abad82cf] Running
	I0927 00:16:29.436074    8355 system_pods.go:89] "csi-hostpath-attacher-0" [c3869238-f637-430d-b854-92bd76cc44fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:29.436082    8355 system_pods.go:89] "csi-hostpath-resizer-0" [4b297f3d-aeaa-4a5d-8b74-f7174019b812] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:29.436090    8355 system_pods.go:89] "csi-hostpathplugin-jmcgj" [277b306d-cb93-419b-8cee-55a5570d009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:29.436103    8355 system_pods.go:89] "etcd-addons-835847" [79b5e8ee-117c-48e1-baab-fe318fca930e] Running
	I0927 00:16:29.436112    8355 system_pods.go:89] "kube-apiserver-addons-835847" [579d15f4-818a-4fbc-a6db-23d34aeffea8] Running
	I0927 00:16:29.436117    8355 system_pods.go:89] "kube-controller-manager-addons-835847" [ee2d4237-f212-4068-ba76-af07caa6a2fa] Running
	I0927 00:16:29.436137    8355 system_pods.go:89] "kube-ingress-dns-minikube" [afb9c90d-de1d-4d41-a089-c58d2ad953f4] Running
	I0927 00:16:29.436141    8355 system_pods.go:89] "kube-proxy-sh55m" [d5ff899a-b75e-429d-bc03-d269a2a48ce2] Running
	I0927 00:16:29.436145    8355 system_pods.go:89] "kube-scheduler-addons-835847" [1d263758-8b84-4b6c-995e-7b727372026c] Running
	I0927 00:16:29.436159    8355 system_pods.go:89] "metrics-server-84c5f94fbc-5ck7c" [b1561527-6ede-4c7d-89b0-dc3e89f14879] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:29.436167    8355 system_pods.go:89] "nvidia-device-plugin-daemonset-pxf2p" [29a178e4-9317-46ca-b2a2-4a1fa8ca2860] Running
	I0927 00:16:29.436186    8355 system_pods.go:89] "registry-66c9cd494c-cfh4x" [7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2] Running
	I0927 00:16:29.436191    8355 system_pods.go:89] "registry-proxy-pn662" [eb773589-5926-4f4f-8548-d2dee389a285] Running
	I0927 00:16:29.436198    8355 system_pods.go:89] "snapshot-controller-56fcc65765-gdzpx" [f2bb6dd6-0f43-437c-a5b9-d91f084332f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:29.436209    8355 system_pods.go:89] "snapshot-controller-56fcc65765-jjm9x" [6aa4e47b-5902-4da5-a4a9-f6ccd932944c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:29.436213    8355 system_pods.go:89] "storage-provisioner" [17e1dac7-0278-4861-bbf2-9b70936db1b4] Running
	I0927 00:16:29.436221    8355 system_pods.go:126] duration metric: took 207.17573ms to wait for k8s-apps to be running ...
	I0927 00:16:29.436232    8355 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:16:29.436286    8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:16:29.448511    8355 system_svc.go:56] duration metric: took 12.260841ms WaitForService to wait for kubelet
	I0927 00:16:29.448539    8355 kubeadm.go:582] duration metric: took 40.230205109s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:16:29.448557    8355 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:16:29.515913    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:29.628841    8355 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0927 00:16:29.628870    8355 node_conditions.go:123] node cpu capacity is 2
	I0927 00:16:29.628884    8355 node_conditions.go:105] duration metric: took 180.321357ms to run NodePressure ...
	I0927 00:16:29.628897    8355 start.go:241] waiting for startup goroutines ...
	I0927 00:16:29.887901    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:30.019814    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:30.388655    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:30.516276    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:30.888309    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:31.016826    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:31.387344    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:31.515798    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:31.887855    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:32.016762    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:32.388644    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:32.516703    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:32.889323    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:33.016690    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:33.388234    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:33.517506    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:33.889666    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:34.017067    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:34.393336    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:34.516704    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:34.888972    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:35.017502    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:35.387989    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:35.516621    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:35.889260    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:36.016903    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:36.387590    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:36.516484    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:36.889251    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:37.018454    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:37.387175    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:37.516855    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:37.888421    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.017428    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:38.388008    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.516284    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:38.887736    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.017081    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.388661    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.516244    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.889025    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.016646    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.388525    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.516180    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.888744    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.016698    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:41.388157    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.516980    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:41.887919    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.017076    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.388434    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.516434    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.890362    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.016994    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.388677    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.517754    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.888618    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.016243    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.388699    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.516696    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.889791    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:45.017419    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.389248    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:45.517001    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.887186    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.016775    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.388771    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.520835    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.888618    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.016245    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.391068    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.517463    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.888830    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.017254    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.387928    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.516380    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.888506    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:49.015714    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.387648    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:49.516299    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.950022    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.017022    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.388554    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.516494    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.887896    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.018663    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.387606    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.515757    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.889945    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.016739    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.389570    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.517201    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.888501    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.015958    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.387557    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.516723    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.887963    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.016545    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.387781    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.517994    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.887968    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.016658    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.387368    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.515881    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.887408    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:56.015711    8355 kapi.go:107] duration metric: took 53.004208057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:16:56.387360    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:56.888650    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.387095    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.887924    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.387462    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.887369    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:59.388194    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:59.886836    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.387644    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.888249    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.387981    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.888133    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:02.387236    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:02.888603    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.387580    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.887910    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.388113    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.888457    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.388422    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.887789    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.388454    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.889014    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.387580    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.888363    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.387758    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.888169    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.390317    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.892648    8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.388646    8355 kapi.go:107] duration metric: took 1m11.005413483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:17:26.745172    8355 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:17:26.745192    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.246153    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.746026    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.246045    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.745918    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.245137    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.746505    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.246223    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.746210    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.246016    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.745715    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.245864    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.745127    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.246038    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.746298    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.245524    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.745458    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.245433    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.745396    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.246003    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.746234    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.245588    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.745315    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.245131    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.745966    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.245650    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.745247    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.245909    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.745846    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.247733    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.744900    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.245467    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.744951    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.245524    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.745418    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.245165    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.746106    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.246490    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.745418    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.245102    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.745921    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.245855    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.745223    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.245196    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.745936    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.245741    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.746109    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.246399    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.745085    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.246212    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.745711    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.245340    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.745492    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.246078    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.745880    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.245628    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.745948    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.245148    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.745871    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.245557    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.746716    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.245215    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.746119    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:58.245631    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:58.746046    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:59.245994    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:59.746862    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:00.245370    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:00.745784    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:01.245813    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:01.745437    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:02.245821    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:02.746479    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:03.245303    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:03.745102    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:04.245541    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:04.745328    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:05.245806    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:05.746644    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:06.244798    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:06.746659    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:07.246001    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:07.747134    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:08.245867    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:08.746814    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:09.245856    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:09.745399    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:10.245697    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:10.745722    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:11.252354    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:11.745387    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:12.245076    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:12.745376    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:13.245477    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:13.745643    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:14.246042    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:14.746022    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:15.245407    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:15.746454    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:16.244840    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:16.745555    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:17.245395    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:17.746617    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:18.245144    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:18.745849    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:19.245721    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:19.745979    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:20.245412    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:20.745538    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:21.245592    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:21.746382    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:22.245099    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:22.745758    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:23.245483    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:23.745442    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:24.246518    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:24.745272    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:25.245555    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:25.746792    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:26.245291    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:26.745704    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:27.246063    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:27.745810    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:28.245529    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:28.745619    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:29.244701    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:29.746460    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:30.245105    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:30.746103    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:31.245003    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:31.745955    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:32.245975    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:32.746702    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:33.245666    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:33.747981    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:34.245319    8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:34.745738    8355 kapi.go:107] duration metric: took 2m30.003995474s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:18:34.748011    8355 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-835847 cluster.
	I0927 00:18:34.750026    8355 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:18:34.751842    8355 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:18:34.753590    8355 out.go:177] * Enabled addons: ingress-dns, storage-provisioner-rancher, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0927 00:18:34.755367    8355 addons.go:510] duration metric: took 2m45.536656652s for enable addons: enabled=[ingress-dns storage-provisioner-rancher nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0927 00:18:34.755422    8355 start.go:246] waiting for cluster config update ...
	I0927 00:18:34.755444    8355 start.go:255] writing updated cluster config ...
	I0927 00:18:34.755733    8355 ssh_runner.go:195] Run: rm -f paused
	I0927 00:18:35.068794    8355 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:18:35.071548    8355 out.go:177] * Done! kubectl is now configured to use "addons-835847" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 27 00:28:14 addons-835847 cri-dockerd[1544]: time="2024-09-27T00:28:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3f209215e53b1404b23abee7940546f4ed9bc07968b89ddefbf1b54b6d86a97/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 27 00:28:15 addons-835847 cri-dockerd[1544]: time="2024-09-27T00:28:15Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 27 00:28:21 addons-835847 dockerd[1284]: time="2024-09-27T00:28:21.556827857Z" level=info msg="ignoring event" container=50c5d43ae3331eab81e85bf51ff3a6948a48a7376e6e07e5b573e02ad258d826 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:21 addons-835847 dockerd[1284]: time="2024-09-27T00:28:21.681046451Z" level=info msg="ignoring event" container=e3f209215e53b1404b23abee7940546f4ed9bc07968b89ddefbf1b54b6d86a97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.278515633Z" level=info msg="ignoring event" container=571b69b635a8758a3c5aa749398f0879078b7d33540952e5ac828ae81f196d31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.371556942Z" level=info msg="ignoring event" container=5dd7b2109168a8b604240546dca351210663bb3178aceb985051dbd6e7404e4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.383623841Z" level=info msg="ignoring event" container=047431751ba787cac6ea12753334a27a03bf4b7085faff84c16c331691a147e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.398696379Z" level=info msg="ignoring event" container=02872af86ff36ab5ee26842c998c09ed342f6bb76d30ecbc4538c3419094b914 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.406084270Z" level=info msg="ignoring event" container=f7c03209d552a21f72906d523edac4a2e0bac058c2dadcef925f03deb32e9e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.410017667Z" level=info msg="ignoring event" container=78e4b11a9cc699c0613269f6e265bfaa2e751028449ef840d122df88a62ff7d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.454200660Z" level=info msg="ignoring event" container=3b9efd2fa9867692860a2beba3c9d0a330e8d0fc7d58b419f108397ec01dbc25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.471929832Z" level=info msg="ignoring event" container=de679a3c1cb52e2944f8d62e62db7dcb077635c9345283d125811fdf765cb58a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.600275710Z" level=info msg="ignoring event" container=4d2b09e15e6f09fe7504e0fa848e4ac215abbbdc071276c6ac863a7529578970 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.669481033Z" level=info msg="ignoring event" container=232d9b0f19a3c81c1c3c3d96c4badbd7237201333ecbc54a3a4729a13ee65d17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.707077300Z" level=info msg="ignoring event" container=9369eed71bdf12db0359ac78626ed55dcfcca30e4228cb5c7be746876b525a4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:29 addons-835847 dockerd[1284]: time="2024-09-27T00:28:29.886276734Z" level=info msg="ignoring event" container=444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:29 addons-835847 dockerd[1284]: time="2024-09-27T00:28:29.901546157Z" level=info msg="ignoring event" container=d971adaad037c1f787aeeaef3fb4f1643b6b34322f2eb4c4f5b32457b49fbcf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:30 addons-835847 dockerd[1284]: time="2024-09-27T00:28:30.065645242Z" level=info msg="ignoring event" container=25811c9ebe8787cf62dc73e5ffce884af6314161cdf7bd96a0c6ce670726fb7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:30 addons-835847 dockerd[1284]: time="2024-09-27T00:28:30.143754606Z" level=info msg="ignoring event" container=7ceb584e51fd3c870e423f721f931dd271a50234e0e0ab5324aff5e593fad238 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:30 addons-835847 dockerd[1284]: time="2024-09-27T00:28:30.500867739Z" level=info msg="ignoring event" container=c94749d12ab9612939db7c0a7fbeba8c61c6bf02890b760c127d1c6bbb5c634a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.053018304Z" level=info msg="ignoring event" container=7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.133491160Z" level=info msg="ignoring event" container=925ecbc7f57cabd7ab39dc74270d893f34d143bb4c0be03ecd693fe771229367 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.333453560Z" level=info msg="ignoring event" container=f6b1af5fb78c37bf2b8d6a858a1c710dc43b137dac00bc49e2c4699dc339cba1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.452934553Z" level=info msg="ignoring event" container=6f5a76e7c0d42e555b12a2554b17dbc681d53cca9e0ea34e6b08883b506053dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:31 addons-835847 cri-dockerd[1544]: time="2024-09-27T00:28:31Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-pn662_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6f5a76e7c0d42e555b12a2554b17dbc681d53cca9e0ea34e6b08883b506053dc\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	ecc4d5db675d2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            36 seconds ago      Exited              gadget                     7                   be6deb646d3ab       gadget-vgn99
	2c231eb43d312       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   59fda7f313330       gcp-auth-89d5ffd79-zh66d
	9363655aec613       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   f016c22499a0d       ingress-nginx-controller-bc57996ff-p24hh
	5a67fde840c70       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   2ee370dae128c       ingress-nginx-admission-patch-2vwz7
	c241abdb2d040       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   016b997eb5c41       ingress-nginx-admission-create-zmrlk
	062615b753050       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   cea03d1f057cf       yakd-dashboard-67d98fc6b-5zgdz
	b8a26a48dd957       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server             0                   aceb0abcbba69       metrics-server-84c5f94fbc-5ck7c
	dee6d4ee2c7c8       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   62f5d14093b8a       local-path-provisioner-86d989889c-q2t26
	e24271cdd951e       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e               12 minutes ago      Running             cloud-spanner-emulator     0                   04dbcab398158       cloud-spanner-emulator-5b584cc74-nmwkl
	dab4141b089d3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   3331e14188e68       kube-ingress-dns-minikube
	f16d5fef10c74       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   547d52adae155       nvidia-device-plugin-daemonset-pxf2p
	7f26961be53af       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   4d3a88573a6cd       storage-provisioner
	5b8816dfaa21d       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   f6f0726e5877d       coredns-7c65d6cfc9-tvzhv
	a8d675bfc6703       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   f68aef1cccba2       kube-proxy-sh55m
	32e135bfac565       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   ecb9dc9058b4f       kube-controller-manager-addons-835847
	c9890999f9cce       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   fb5ff4baef843       kube-scheduler-addons-835847
	0ba8c27835f8a       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   7226344b1c107       etcd-addons-835847
	d5a41b0f8c76b       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   08ffb47a70cbe       kube-apiserver-addons-835847
	
	
	==> controller_ingress [9363655aec61] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0927 00:17:09.101883       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0927 00:17:09.931154       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0927 00:17:09.992269       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0927 00:17:10.006565       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0927 00:17:10.027496       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a1815dc4-44fb-4e28-9732-b133414d44e7", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0927 00:17:10.036185       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d1d8441a-17c1-43d1-8f2b-6332918e5a69", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0927 00:17:10.036222       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"3a094131-3bd8-4760-91d4-c48fc23e469a", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0927 00:17:11.208141       7 nginx.go:317] "Starting NGINX process"
	I0927 00:17:11.208253       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0927 00:17:11.208487       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0927 00:17:11.208657       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0927 00:17:11.217218       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0927 00:17:11.218203       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-p24hh"
	I0927 00:17:11.227246       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-p24hh" node="addons-835847"
	I0927 00:17:11.252216       7 controller.go:213] "Backend successfully reloaded"
	I0927 00:17:11.252303       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0927 00:17:11.252452       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-p24hh", UID:"b6d5039f-a25c-4fdd-a953-8ab0bdc94a32", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [5b8816dfaa21] <==
	[INFO] 10.244.0.8:44184 - 51573 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000092372s
	[INFO] 10.244.0.8:44184 - 24699 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002398412s
	[INFO] 10.244.0.8:44184 - 2339 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002213111s
	[INFO] 10.244.0.8:44184 - 48605 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145966s
	[INFO] 10.244.0.8:44184 - 1809 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00010989s
	[INFO] 10.244.0.8:49095 - 57180 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000295396s
	[INFO] 10.244.0.8:49095 - 57418 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105737s
	[INFO] 10.244.0.8:50055 - 27977 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063195s
	[INFO] 10.244.0.8:50055 - 28413 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067543s
	[INFO] 10.244.0.8:57616 - 59023 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074485s
	[INFO] 10.244.0.8:57616 - 59204 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038736s
	[INFO] 10.244.0.8:41998 - 38974 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003091405s
	[INFO] 10.244.0.8:41998 - 38802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003982752s
	[INFO] 10.244.0.8:35760 - 21191 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079301s
	[INFO] 10.244.0.8:35760 - 21380 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040918s
	[INFO] 10.244.0.25:58942 - 63161 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195516s
	[INFO] 10.244.0.25:47346 - 49039 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000140125s
	[INFO] 10.244.0.25:33720 - 9844 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127251s
	[INFO] 10.244.0.25:49152 - 46325 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091707s
	[INFO] 10.244.0.25:40956 - 39954 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096876s
	[INFO] 10.244.0.25:59262 - 12709 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086341s
	[INFO] 10.244.0.25:52788 - 61254 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002751226s
	[INFO] 10.244.0.25:57471 - 47251 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002596284s
	[INFO] 10.244.0.25:35907 - 20023 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00200994s
	[INFO] 10.244.0.25:33455 - 57395 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001557271s
	
	
	==> describe nodes <==
	Name:               addons-835847
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-835847
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-835847
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_15_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-835847
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-835847
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:28:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:27:47 +0000   Fri, 27 Sep 2024 00:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:27:47 +0000   Fri, 27 Sep 2024 00:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:27:47 +0000   Fri, 27 Sep 2024 00:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:27:47 +0000   Fri, 27 Sep 2024 00:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-835847
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa692830ed544dfda90a0dde21ecfabb
	  System UUID:                d753872c-7080-4426-b42a-b70d7a7c1bc7
	  Boot ID:                    fe6ac0e5-a46e-47ee-84bc-0bc2ad3e866e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-5b584cc74-nmwkl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-vgn99                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-zh66d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-p24hh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-tvzhv                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-835847                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-835847                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-835847       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-sh55m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-835847                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-5ck7c             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-pxf2p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-q2t26     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-5zgdz              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             588Mi (7%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-835847 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-835847 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-835847 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-835847 event: Registered Node addons-835847 in Controller
	
	
	==> dmesg <==
	[Sep26 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014578] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.452316] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.063261] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.019563] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.667102] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.021396] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [0ba8c27835f8] <==
	{"level":"info","ts":"2024-09-27T00:15:37.505502Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T00:15:37.505537Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T00:15:37.676115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T00:15:37.676395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T00:15:37.676571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-27T00:15:37.676727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:37.676867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:37.677028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:37.677136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:37.679697Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:37.682361Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-835847 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T00:15:37.684129Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:15:37.684770Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:15:37.685770Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:15:37.686940Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T00:15:37.688129Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:37.691721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:37.691878Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:37.688853Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:15:37.693169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-27T00:15:37.694900Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T00:15:37.694928Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T00:25:38.427524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1862}
	{"level":"info","ts":"2024-09-27T00:25:38.472611Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1862,"took":"44.194266ms","hash":3342251583,"current-db-size-bytes":9072640,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4923392,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-27T00:25:38.472659Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3342251583,"revision":1862,"compact-revision":-1}
	
	
	==> gcp-auth [2c231eb43d31] <==
	2024/09/27 00:18:34 GCP Auth Webhook started!
	2024/09/27 00:18:51 Ready to marshal response ...
	2024/09/27 00:18:51 Ready to write response ...
	2024/09/27 00:18:52 Ready to marshal response ...
	2024/09/27 00:18:52 Ready to write response ...
	2024/09/27 00:19:15 Ready to marshal response ...
	2024/09/27 00:19:15 Ready to write response ...
	2024/09/27 00:19:16 Ready to marshal response ...
	2024/09/27 00:19:16 Ready to write response ...
	2024/09/27 00:19:16 Ready to marshal response ...
	2024/09/27 00:19:16 Ready to write response ...
	2024/09/27 00:27:19 Ready to marshal response ...
	2024/09/27 00:27:19 Ready to write response ...
	2024/09/27 00:27:19 Ready to marshal response ...
	2024/09/27 00:27:19 Ready to write response ...
	2024/09/27 00:27:19 Ready to marshal response ...
	2024/09/27 00:27:19 Ready to write response ...
	2024/09/27 00:27:30 Ready to marshal response ...
	2024/09/27 00:27:30 Ready to write response ...
	2024/09/27 00:27:48 Ready to marshal response ...
	2024/09/27 00:27:48 Ready to write response ...
	2024/09/27 00:28:14 Ready to marshal response ...
	2024/09/27 00:28:14 Ready to write response ...
	
	
	==> kernel <==
	 00:28:32 up  1:11,  0 users,  load average: 0.28, 0.36, 0.38
	Linux addons-835847 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [d5a41b0f8c76] <==
	I0927 00:19:06.424299       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:19:06.502332       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:19:06.572962       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0927 00:19:06.683568       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0927 00:19:06.947479       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0927 00:19:07.163385       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0927 00:19:07.163412       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0927 00:19:07.168280       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0927 00:19:07.573264       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0927 00:19:07.833031       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0927 00:27:19.836774       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.230.114"}
	I0927 00:27:56.062605       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:28:29.601054       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:29.601116       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:29.622077       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:29.622236       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:29.629209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:29.629248       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:29.672624       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:29.673081       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:29.769430       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:29.769470       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:28:30.623389       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:28:30.770440       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:28:30.879449       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [32e135bfac56] <==
	E0927 00:27:58.022590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:05.100394       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:05.100440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:07.893216       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:07.893262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:11.238307       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:11.238351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:15.488532       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:15.488605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:23.184354       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0927 00:28:23.276676       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0927 00:28:23.447457       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-835847"
	W0927 00:28:23.932780       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:23.932828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:29.799098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="6.556µs"
	E0927 00:28:30.625717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:28:30.772177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:28:30.881661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:30.968506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.884µs"
	W0927 00:28:32.174814       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:32.174854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:32.275202       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:32.275246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:32.337181       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:32.337231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a8d675bfc670] <==
	I0927 00:15:50.541800       1 server_linux.go:66] "Using iptables proxy"
	I0927 00:15:50.655046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 00:15:50.655117       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:15:50.696958       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 00:15:50.697012       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:15:50.698509       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:15:50.698823       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:15:50.698837       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:15:50.700772       1 config.go:199] "Starting service config controller"
	I0927 00:15:50.700797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:15:50.700826       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:15:50.700830       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:15:50.700840       1 config.go:328] "Starting node config controller"
	I0927 00:15:50.700852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:15:50.801108       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:15:50.801161       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:15:50.801361       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c9890999f9cc] <==
	W0927 00:15:42.161470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0927 00:15:42.161566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:15:42.161636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0927 00:15:42.161498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.161789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:42.161818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.161945       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:15:42.162013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:15:42.162107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:15:42.162381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:15:42.162646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:42.162972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:15:42.163235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:15:42.163777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:15:42.164194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:42.162301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:15:42.164432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:15:43.749707       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.411030    2340 scope.go:117] "RemoveContainer" containerID="444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.457974    2340 scope.go:117] "RemoveContainer" containerID="444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
	Sep 27 00:28:30 addons-835847 kubelet[2340]: E0927 00:28:30.459227    2340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c" containerID="444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.459260    2340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"} err="failed to get container status \"444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.602967    2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a9bb701-661e-4834-9c85-c40c6ad26b6f-gcp-creds\") pod \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\" (UID: \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\") "
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.603156    2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhmht\" (UniqueName: \"kubernetes.io/projected/5a9bb701-661e-4834-9c85-c40c6ad26b6f-kube-api-access-rhmht\") pod \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\" (UID: \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\") "
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.603451    2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a9bb701-661e-4834-9c85-c40c6ad26b6f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5a9bb701-661e-4834-9c85-c40c6ad26b6f" (UID: "5a9bb701-661e-4834-9c85-c40c6ad26b6f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.605369    2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9bb701-661e-4834-9c85-c40c6ad26b6f-kube-api-access-rhmht" (OuterVolumeSpecName: "kube-api-access-rhmht") pod "5a9bb701-661e-4834-9c85-c40c6ad26b6f" (UID: "5a9bb701-661e-4834-9c85-c40c6ad26b6f"). InnerVolumeSpecName "kube-api-access-rhmht". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.703556    2340 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a9bb701-661e-4834-9c85-c40c6ad26b6f-gcp-creds\") on node \"addons-835847\" DevicePath \"\""
	Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.703596    2340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rhmht\" (UniqueName: \"kubernetes.io/projected/5a9bb701-661e-4834-9c85-c40c6ad26b6f-kube-api-access-rhmht\") on node \"addons-835847\" DevicePath \"\""
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.512518    2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vk4p\" (UniqueName: \"kubernetes.io/projected/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2-kube-api-access-4vk4p\") pod \"7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2\" (UID: \"7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2\") "
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.524408    2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2-kube-api-access-4vk4p" (OuterVolumeSpecName: "kube-api-access-4vk4p") pod "7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2" (UID: "7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2"). InnerVolumeSpecName "kube-api-access-4vk4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.544832    2340 scope.go:117] "RemoveContainer" containerID="925ecbc7f57cabd7ab39dc74270d893f34d143bb4c0be03ecd693fe771229367"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.588472    2340 scope.go:117] "RemoveContainer" containerID="7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.612774    2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7xmd\" (UniqueName: \"kubernetes.io/projected/eb773589-5926-4f4f-8548-d2dee389a285-kube-api-access-j7xmd\") pod \"eb773589-5926-4f4f-8548-d2dee389a285\" (UID: \"eb773589-5926-4f4f-8548-d2dee389a285\") "
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.612915    2340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4vk4p\" (UniqueName: \"kubernetes.io/projected/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2-kube-api-access-4vk4p\") on node \"addons-835847\" DevicePath \"\""
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.616004    2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb773589-5926-4f4f-8548-d2dee389a285-kube-api-access-j7xmd" (OuterVolumeSpecName: "kube-api-access-j7xmd") pod "eb773589-5926-4f4f-8548-d2dee389a285" (UID: "eb773589-5926-4f4f-8548-d2dee389a285"). InnerVolumeSpecName "kube-api-access-j7xmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.676823    2340 scope.go:117] "RemoveContainer" containerID="7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: E0927 00:28:31.678316    2340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25" containerID="7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.678367    2340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"} err="failed to get container status \"7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.713716    2340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j7xmd\" (UniqueName: \"kubernetes.io/projected/eb773589-5926-4f4f-8548-d2dee389a285-kube-api-access-j7xmd\") on node \"addons-835847\" DevicePath \"\""
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.819384    2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a9bb701-661e-4834-9c85-c40c6ad26b6f" path="/var/lib/kubelet/pods/5a9bb701-661e-4834-9c85-c40c6ad26b6f/volumes"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.819811    2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa4e47b-5902-4da5-a4a9-f6ccd932944c" path="/var/lib/kubelet/pods/6aa4e47b-5902-4da5-a4a9-f6ccd932944c/volumes"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.820354    2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2" path="/var/lib/kubelet/pods/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2/volumes"
	Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.820715    2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bb6dd6-0f43-437c-a5b9-d91f084332f5" path="/var/lib/kubelet/pods/f2bb6dd6-0f43-437c-a5b9-d91f084332f5/volumes"
	
	
	==> storage-provisioner [7f26961be53a] <==
	I0927 00:15:56.338504       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:15:56.358085       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:15:56.358171       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:15:56.384983       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:15:56.385369       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"175f1bd0-9e7f-4586-abbf-cb5aea70e889", APIVersion:"v1", ResourceVersion:"561", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-835847_89578564-4b80-478e-ad16-0b6cb68ab36e became leader
	I0927 00:15:56.385395       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-835847_89578564-4b80-478e-ad16-0b6cb68ab36e!
	I0927 00:15:56.486781       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-835847_89578564-4b80-478e-ad16-0b6cb68ab36e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-835847 -n addons-835847
helpers_test.go:261: (dbg) Run:  kubectl --context addons-835847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-835847 describe pod busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-835847 describe pod busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7: exit status 1 (93.226593ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-835847/192.168.49.2
	Start Time:       Fri, 27 Sep 2024 00:19:16 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29ff8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-29ff8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-835847
	  Warning  Failed     7m58s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m44s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m14s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zmrlk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2vwz7" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-835847 describe pod busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.31s)
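The failing step can be re-run by hand against the same cluster. This is a minimal reproduction sketch of the check the test performs (the context name addons-835847 is specific to this CI run and will differ on a fresh cluster); note the pod events above already point at the likely cause: the busybox image pull from gcr.io failed with "unauthorized: authentication failed", so the test pod never came up and the wget check timed out after 1m0s.

```shell
# Re-run the registry health check that returned exit status 1 (verbatim from
# addons_test.go:343; substitute your own minikube profile for addons-835847).
kubectl --context addons-835847 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# If it hangs, inspect the test pod's events for ImagePullBackOff / ErrImagePull,
# the symptom shown in the describe output above.
kubectl --context addons-835847 describe pod registry-test
```

Both commands require the minikube cluster from this run (or an equivalent one with the registry addon enabled) to still be reachable.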


Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 5.42
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.54
22 TestOffline 52.84
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 219.55
29 TestAddons/serial/Volcano 40.71
31 TestAddons/serial/GCPAuth/Namespaces 0.17
34 TestAddons/parallel/Ingress 17.32
35 TestAddons/parallel/InspektorGadget 10.72
36 TestAddons/parallel/MetricsServer 6.68
38 TestAddons/parallel/CSI 53.56
39 TestAddons/parallel/Headlamp 17.49
40 TestAddons/parallel/CloudSpanner 5.49
41 TestAddons/parallel/LocalPath 53.04
42 TestAddons/parallel/NvidiaDevicePlugin 5.45
43 TestAddons/parallel/Yakd 10.87
44 TestAddons/StoppedEnableDisable 6.01
45 TestCertOptions 43.91
46 TestCertExpiration 247.46
47 TestDockerFlags 43.11
48 TestForceSystemdFlag 44.81
49 TestForceSystemdEnv 39.63
55 TestErrorSpam/setup 30.65
56 TestErrorSpam/start 0.7
57 TestErrorSpam/status 0.98
58 TestErrorSpam/pause 1.3
59 TestErrorSpam/unpause 1.46
60 TestErrorSpam/stop 10.91
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 40.07
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 31.89
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
72 TestFunctional/serial/CacheCmd/cache/add_local 0.94
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.13
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 41.08
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.08
83 TestFunctional/serial/LogsFileCmd 1.17
84 TestFunctional/serial/InvalidService 4.79
86 TestFunctional/parallel/ConfigCmd 0.45
87 TestFunctional/parallel/DashboardCmd 12.33
88 TestFunctional/parallel/DryRun 0.44
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 1
94 TestFunctional/parallel/ServiceCmdConnect 13.61
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 26.6
98 TestFunctional/parallel/SSHCmd 0.65
99 TestFunctional/parallel/CpCmd 2.09
101 TestFunctional/parallel/FileSync 0.36
102 TestFunctional/parallel/CertSync 1.91
106 TestFunctional/parallel/NodeLabels 0.13
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
110 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
124 TestFunctional/parallel/ProfileCmd/profile_list 0.43
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
126 TestFunctional/parallel/MountCmd/any-port 7.99
127 TestFunctional/parallel/ServiceCmd/List 0.48
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
130 TestFunctional/parallel/ServiceCmd/Format 0.36
131 TestFunctional/parallel/ServiceCmd/URL 0.49
132 TestFunctional/parallel/MountCmd/specific-port 2.29
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.26
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.05
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.28
141 TestFunctional/parallel/ImageCommands/Setup 0.73
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
148 TestFunctional/parallel/DockerEnv/bash 1.28
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 121.29
160 TestMultiControlPlane/serial/DeployApp 42.18
161 TestMultiControlPlane/serial/PingHostFromPods 1.56
162 TestMultiControlPlane/serial/AddWorkerNode 25.02
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
165 TestMultiControlPlane/serial/CopyFile 18.31
166 TestMultiControlPlane/serial/StopSecondaryNode 11.84
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
168 TestMultiControlPlane/serial/RestartSecondaryNode 65.43
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 244.39
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.06
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
173 TestMultiControlPlane/serial/StopCluster 32.74
174 TestMultiControlPlane/serial/RestartCluster 90.97
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
176 TestMultiControlPlane/serial/AddSecondaryNode 45.52
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
180 TestImageBuild/serial/Setup 31.31
181 TestImageBuild/serial/NormalBuild 1.94
182 TestImageBuild/serial/BuildWithBuildArg 1.08
183 TestImageBuild/serial/BuildWithDockerIgnore 0.78
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.9
188 TestJSONOutput/start/Command 75
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.58
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.51
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.71
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.22
213 TestKicCustomNetwork/create_custom_network 33.46
214 TestKicCustomNetwork/use_default_bridge_network 31.32
215 TestKicExistingNetwork 32.67
216 TestKicCustomSubnet 35.73
217 TestKicStaticIP 35.15
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 74.88
222 TestMountStart/serial/StartWithMountFirst 8.16
223 TestMountStart/serial/VerifyMountFirst 0.25
224 TestMountStart/serial/StartWithMountSecond 7.57
225 TestMountStart/serial/VerifyMountSecond 0.25
226 TestMountStart/serial/DeleteFirst 1.49
227 TestMountStart/serial/VerifyMountPostDelete 0.25
228 TestMountStart/serial/Stop 1.2
229 TestMountStart/serial/RestartStopped 8.04
230 TestMountStart/serial/VerifyMountPostStop 0.25
233 TestMultiNode/serial/FreshStart2Nodes 79.31
234 TestMultiNode/serial/DeployApp2Nodes 48.08
235 TestMultiNode/serial/PingHostFrom2Pods 1.05
236 TestMultiNode/serial/AddNode 17.82
237 TestMultiNode/serial/MultiNodeLabels 0.1
238 TestMultiNode/serial/ProfileList 0.67
239 TestMultiNode/serial/CopyFile 9.61
240 TestMultiNode/serial/StopNode 2.17
241 TestMultiNode/serial/StartAfterStop 10.35
242 TestMultiNode/serial/RestartKeepsNodes 115.91
243 TestMultiNode/serial/DeleteNode 5.47
244 TestMultiNode/serial/StopMultiNode 21.83
245 TestMultiNode/serial/RestartMultiNode 51.52
246 TestMultiNode/serial/ValidateNameConflict 37.41
251 TestPreload 148.4
253 TestScheduledStopUnix 105.38
254 TestSkaffold 117.12
256 TestInsufficientStorage 10.87
257 TestRunningBinaryUpgrade 81.7
259 TestKubernetesUpgrade 384.05
260 TestMissingContainerUpgrade 113.5
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 44.28
264 TestNoKubernetes/serial/StartWithStopK8s 19.31
276 TestNoKubernetes/serial/Start 9.99
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
278 TestNoKubernetes/serial/ProfileList 1.1
279 TestNoKubernetes/serial/Stop 1.28
280 TestNoKubernetes/serial/StartNoArgs 8.36
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
282 TestStoppedBinaryUpgrade/Setup 0.79
283 TestStoppedBinaryUpgrade/Upgrade 123.44
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
293 TestPause/serial/Start 45.9
294 TestPause/serial/SecondStartNoReconfiguration 26.89
295 TestPause/serial/Pause 0.67
296 TestPause/serial/VerifyStatus 0.36
297 TestPause/serial/Unpause 0.48
298 TestPause/serial/PauseAgain 0.68
299 TestPause/serial/DeletePaused 2.23
300 TestPause/serial/VerifyDeletedResources 0.35
301 TestNetworkPlugins/group/auto/Start 75.04
302 TestNetworkPlugins/group/auto/KubeletFlags 0.26
303 TestNetworkPlugins/group/auto/NetCatPod 10.28
304 TestNetworkPlugins/group/auto/DNS 0.23
305 TestNetworkPlugins/group/auto/Localhost 0.15
306 TestNetworkPlugins/group/auto/HairPin 0.15
307 TestNetworkPlugins/group/kindnet/Start 75.26
308 TestNetworkPlugins/group/calico/Start 67.05
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
311 TestNetworkPlugins/group/kindnet/NetCatPod 10.32
312 TestNetworkPlugins/group/kindnet/DNS 0.27
313 TestNetworkPlugins/group/kindnet/Localhost 0.2
314 TestNetworkPlugins/group/kindnet/HairPin 0.23
315 TestNetworkPlugins/group/custom-flannel/Start 59.79
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.41
318 TestNetworkPlugins/group/calico/NetCatPod 13.48
319 TestNetworkPlugins/group/calico/DNS 0.26
320 TestNetworkPlugins/group/calico/Localhost 0.22
321 TestNetworkPlugins/group/calico/HairPin 0.2
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
324 TestNetworkPlugins/group/false/Start 52.94
325 TestNetworkPlugins/group/custom-flannel/DNS 0.21
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
328 TestNetworkPlugins/group/enable-default-cni/Start 49.89
329 TestNetworkPlugins/group/false/KubeletFlags 0.34
330 TestNetworkPlugins/group/false/NetCatPod 11.34
331 TestNetworkPlugins/group/false/DNS 0.26
332 TestNetworkPlugins/group/false/Localhost 0.22
333 TestNetworkPlugins/group/false/HairPin 0.21
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.35
336 TestNetworkPlugins/group/flannel/Start 62.75
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.34
340 TestNetworkPlugins/group/bridge/Start 51.05
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
343 TestNetworkPlugins/group/flannel/NetCatPod 10.33
344 TestNetworkPlugins/group/flannel/DNS 0.18
345 TestNetworkPlugins/group/flannel/Localhost 0.18
346 TestNetworkPlugins/group/flannel/HairPin 0.19
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
348 TestNetworkPlugins/group/bridge/NetCatPod 10.27
349 TestNetworkPlugins/group/bridge/DNS 0.25
350 TestNetworkPlugins/group/bridge/Localhost 0.28
351 TestNetworkPlugins/group/bridge/HairPin 0.21
352 TestNetworkPlugins/group/kubenet/Start 48.88
354 TestStartStop/group/old-k8s-version/serial/FirstStart 154.7
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
356 TestNetworkPlugins/group/kubenet/NetCatPod 12.35
357 TestNetworkPlugins/group/kubenet/DNS 0.25
358 TestNetworkPlugins/group/kubenet/Localhost 0.22
359 TestNetworkPlugins/group/kubenet/HairPin 0.26
361 TestStartStop/group/embed-certs/serial/FirstStart 78.86
362 TestStartStop/group/embed-certs/serial/DeployApp 8.33
363 TestStartStop/group/old-k8s-version/serial/DeployApp 10.52
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
365 TestStartStop/group/embed-certs/serial/Stop 10.92
366 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
367 TestStartStop/group/old-k8s-version/serial/Stop 11.14
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
369 TestStartStop/group/embed-certs/serial/SecondStart 298.89
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
371 TestStartStop/group/old-k8s-version/serial/SecondStart 147.69
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
375 TestStartStop/group/old-k8s-version/serial/Pause 2.65
377 TestStartStop/group/no-preload/serial/FirstStart 52.27
378 TestStartStop/group/no-preload/serial/DeployApp 9.35
379 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
380 TestStartStop/group/no-preload/serial/Stop 10.86
381 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
382 TestStartStop/group/no-preload/serial/SecondStart 289.05
383 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
384 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
385 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/embed-certs/serial/Pause 2.77
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.69
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.92
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 276.08
394 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
396 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
397 TestStartStop/group/no-preload/serial/Pause 2.78
399 TestStartStop/group/newest-cni/serial/FirstStart 39.41
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
402 TestStartStop/group/newest-cni/serial/Stop 10.98
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
404 TestStartStop/group/newest-cni/serial/SecondStart 19.57
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
408 TestStartStop/group/newest-cni/serial/Pause 3.19
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.64
TestDownloadOnly/v1.20.0/json-events (7.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-739605 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-739605 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.244034499s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 00:14:47.972205    7598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0927 00:14:47.972279    7598 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-739605
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-739605: exit status 85 (75.022799ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-739605 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |          |
	|         | -p download-only-739605        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:40.770044    7603 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:40.770225    7603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:40.770251    7603 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:40.770271    7603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:40.770568    7603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	W0927 00:14:40.770724    7603 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19711-2273/.minikube/config/config.json: open /home/jenkins/minikube-integration/19711-2273/.minikube/config/config.json: no such file or directory
	I0927 00:14:40.771159    7603 out.go:352] Setting JSON to true
	I0927 00:14:40.771973    7603 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3432,"bootTime":1727392649,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0927 00:14:40.772060    7603 start.go:139] virtualization:  
	I0927 00:14:40.775161    7603 out.go:97] [download-only-739605] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0927 00:14:40.775282    7603 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:14:40.775311    7603 notify.go:220] Checking for updates...
	I0927 00:14:40.777553    7603 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:40.779416    7603 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:40.781160    7603 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	I0927 00:14:40.782937    7603 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	I0927 00:14:40.784711    7603 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 00:14:40.788158    7603 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:14:40.788394    7603 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:40.808899    7603 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:14:40.809048    7603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:41.129419    7603 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:14:41.120351029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:14:41.129520    7603 docker.go:318] overlay module found
	I0927 00:14:41.131618    7603 out.go:97] Using the docker driver based on user configuration
	I0927 00:14:41.131644    7603 start.go:297] selected driver: docker
	I0927 00:14:41.131651    7603 start.go:901] validating driver "docker" against <nil>
	I0927 00:14:41.131764    7603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:41.187793    7603 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:14:41.17920207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:14:41.188000    7603 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:41.188360    7603 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 00:14:41.188540    7603 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:14:41.190979    7603 out.go:169] Using Docker driver with root privileges
	I0927 00:14:41.192882    7603 cni.go:84] Creating CNI manager for ""
	I0927 00:14:41.192947    7603 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 00:14:41.193017    7603 start.go:340] cluster config:
	{Name:download-only-739605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-739605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:14:41.194750    7603 out.go:97] Starting "download-only-739605" primary control-plane node in "download-only-739605" cluster
	I0927 00:14:41.194773    7603 cache.go:121] Beginning downloading kic base image for docker with docker
	I0927 00:14:41.196633    7603 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:14:41.196672    7603 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 00:14:41.196767    7603 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:14:41.211121    7603 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:14:41.211317    7603 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:14:41.211417    7603 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:14:41.251649    7603 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 00:14:41.251684    7603 cache.go:56] Caching tarball of preloaded images
	I0927 00:14:41.251819    7603 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 00:14:41.253993    7603 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 00:14:41.254012    7603 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0927 00:14:41.340625    7603 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-739605 host does not exist
	  To start a cluster, run: "minikube start -p download-only-739605"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-739605
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-574047 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-574047 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.422456222s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.42s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 00:14:53.783334    7598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 00:14:53.783371    7598 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-574047
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-574047: exit status 85 (68.696428ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-739605 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-739605        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-739605        | download-only-739605 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | -o=json --download-only        | download-only-574047 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-574047        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:48.401471    7804 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:48.401691    7804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:48.401718    7804 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:48.401737    7804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:48.402002    7804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:14:48.402422    7804 out.go:352] Setting JSON to true
	I0927 00:14:48.403201    7804 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3440,"bootTime":1727392649,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0927 00:14:48.403295    7804 start.go:139] virtualization:  
	I0927 00:14:48.405589    7804 out.go:97] [download-only-574047] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:14:48.405830    7804 notify.go:220] Checking for updates...
	I0927 00:14:48.407213    7804 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:48.409020    7804 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:48.410494    7804 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	I0927 00:14:48.412457    7804 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	I0927 00:14:48.414093    7804 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 00:14:48.417307    7804 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:14:48.417617    7804 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:48.445729    7804 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:14:48.445834    7804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:48.496855    7804 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 00:14:48.487622776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:14:48.496964    7804 docker.go:318] overlay module found
	I0927 00:14:48.498699    7804 out.go:97] Using the docker driver based on user configuration
	I0927 00:14:48.498721    7804 start.go:297] selected driver: docker
	I0927 00:14:48.498727    7804 start.go:901] validating driver "docker" against <nil>
	I0927 00:14:48.498827    7804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:14:48.553379    7804 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 00:14:48.544645068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:14:48.553582    7804 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:48.553858    7804 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 00:14:48.554007    7804 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:14:48.556310    7804 out.go:169] Using Docker driver with root privileges
	I0927 00:14:48.558044    7804 cni.go:84] Creating CNI manager for ""
	I0927 00:14:48.558103    7804 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:14:48.558117    7804 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:14:48.558189    7804 start.go:340] cluster config:
	{Name:download-only-574047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-574047 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:14:48.560214    7804 out.go:97] Starting "download-only-574047" primary control-plane node in "download-only-574047" cluster
	I0927 00:14:48.560234    7804 cache.go:121] Beginning downloading kic base image for docker with docker
	I0927 00:14:48.562142    7804 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:14:48.562164    7804 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:14:48.562265    7804 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:14:48.576829    7804 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:14:48.576942    7804 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:14:48.576977    7804 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:14:48.576985    7804 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:14:48.576993    7804 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:14:48.615674    7804 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 00:14:48.615695    7804 cache.go:56] Caching tarball of preloaded images
	I0927 00:14:48.615844    7804 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:14:48.618095    7804 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 00:14:48.618118    7804 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0927 00:14:48.700343    7804 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 00:14:52.260404    7804 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0927 00:14:52.260532    7804 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0927 00:14:53.004672    7804 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 00:14:53.005056    7804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/download-only-574047/config.json ...
	I0927 00:14:53.005090    7804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/download-only-574047/config.json: {Name:mkf4e90d1c5d31237c8e6fadfad8a9bb879c0b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:14:53.005264    7804 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:14:53.005423    7804 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19711-2273/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-574047 host does not exist
	  To start a cluster, run: "minikube start -p download-only-574047"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-574047
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
I0927 00:14:54.927933    7598 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-571152 --alsologtostderr --binary-mirror http://127.0.0.1:40555 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-571152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-571152
--- PASS: TestBinaryMirror (0.54s)

TestOffline (52.84s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-932068 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-932068 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (50.750392717s)
helpers_test.go:175: Cleaning up "offline-docker-932068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-932068
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-932068: (2.092814906s)
--- PASS: TestOffline (52.84s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-835847
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-835847: exit status 85 (69.908967ms)

-- stdout --
	* Profile "addons-835847" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-835847"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-835847
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-835847: exit status 85 (66.010456ms)

-- stdout --
	* Profile "addons-835847" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-835847"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (219.55s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-835847 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-835847 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m39.549833245s)
--- PASS: TestAddons/Setup (219.55s)

TestAddons/serial/Volcano (40.71s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 51.58589ms
addons_test.go:843: volcano-admission stabilized in 52.01419ms
addons_test.go:851: volcano-controller stabilized in 52.248015ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-xmfpn" [19196d48-e779-45eb-8674-87ecfec3d189] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003309379s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-hhdnc" [933e5124-e906-44fc-b7e8-e5c6fae052fc] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004229967s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-gbrzb" [43119fc9-75f2-44c6-9286-a352f6287c30] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004118224s
addons_test.go:870: (dbg) Run:  kubectl --context addons-835847 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-835847 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-835847 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a842e2fe-1edb-40e6-a969-a11ac534044a] Pending
helpers_test.go:344: "test-job-nginx-0" [a842e2fe-1edb-40e6-a969-a11ac534044a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a842e2fe-1edb-40e6-a969-a11ac534044a] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004331346s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 addons disable volcano --alsologtostderr -v=1: (11.063526075s)
--- PASS: TestAddons/serial/Volcano (40.71s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-835847 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-835847 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Ingress (17.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-835847 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-835847 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-835847 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [36e928e6-be97-4552-bd7c-9b1126e247cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [36e928e6-be97-4552-bd7c-9b1126e247cd] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.004010667s
I0927 00:28:44.383489    7598 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-835847 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 addons disable ingress --alsologtostderr -v=1: (7.685856233s)
--- PASS: TestAddons/parallel/Ingress (17.32s)

TestAddons/parallel/InspektorGadget (10.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vgn99" [cbd23a85-2c2b-47b4-9ce5-03fe40bfdebc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003694438s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-835847
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-835847: (5.717744744s)
--- PASS: TestAddons/parallel/InspektorGadget (10.72s)

TestAddons/parallel/MetricsServer (6.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.451283ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-5ck7c" [b1561527-6ede-4c7d-89b0-dc3e89f14879] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004328544s
addons_test.go:413: (dbg) Run:  kubectl --context addons-835847 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.68s)

TestAddons/parallel/CSI (53.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0927 00:27:36.565703    7598 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 00:27:36.570789    7598 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:27:36.570815    7598 kapi.go:107] duration metric: took 5.756304ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 5.765116ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-835847 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-835847 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [50ebac32-ac43-4c9e-91e9-591ee1b058a1] Pending
helpers_test.go:344: "task-pv-pod" [50ebac32-ac43-4c9e-91e9-591ee1b058a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [50ebac32-ac43-4c9e-91e9-591ee1b058a1] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004814264s
addons_test.go:528: (dbg) Run:  kubectl --context addons-835847 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-835847 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-835847 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-835847 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-835847 delete pod task-pv-pod: (1.596537526s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-835847 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-835847 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-835847 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9e03e149-a989-435f-902c-4bf63061d816] Pending
helpers_test.go:344: "task-pv-pod-restore" [9e03e149-a989-435f-902c-4bf63061d816] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9e03e149-a989-435f-902c-4bf63061d816] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003323176s
addons_test.go:570: (dbg) Run:  kubectl --context addons-835847 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-835847 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-835847 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.639271311s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.56s)
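The repeated `get pvc ... -o jsonpath={.status.phase}` invocations in the CSI log above are the harness polling until the claim binds. A minimal shell sketch of that wait loop (hypothetical helper, not part of the test source; the name `wait_for_pvc_phase` and the 2-second interval are assumptions):

```shell
# Hypothetical sketch of the PVC poll seen above: query .status.phase via
# jsonpath until it reaches the desired value or the timeout expires.
wait_for_pvc_phase() {
  local ctx="$1" pvc="$2" ns="$3" want="${4:-Bound}" timeout="${5:-360}"
  local waited=0 phase
  while [ "$waited" -lt "$timeout" ]; do
    phase=$(kubectl --context "$ctx" get pvc "$pvc" \
      -o jsonpath='{.status.phase}' -n "$ns")
    [ "$phase" = "$want" ] && return 0   # claim reached the desired phase
    sleep 2
    waited=$((waited + 2))
  done
  return 1                               # timed out waiting
}
```

Against the cluster above this would be invoked as, e.g., `wait_for_pvc_phase addons-835847 hpvc default Bound 360`.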

                                                
                                    
TestAddons/parallel/Headlamp (17.49s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-835847 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-2dgpp" [2753f582-6b7a-4842-ae65-5f4fa70ea470] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-2dgpp" [2753f582-6b7a-4842-ae65-5f4fa70ea470] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-2dgpp" [2753f582-6b7a-4842-ae65-5f4fa70ea470] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003970389s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 addons disable headlamp --alsologtostderr -v=1: (5.645404616s)
--- PASS: TestAddons/parallel/Headlamp (17.49s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-nmwkl" [04eee759-7621-4453-a882-860753c14126] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004224642s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-835847
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

                                                
                                    
TestAddons/parallel/LocalPath (53.04s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-835847 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-835847 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835847 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [151d241d-e461-4ad7-8cc8-b7aefc32303e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [151d241d-e461-4ad7-8cc8-b7aefc32303e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [151d241d-e461-4ad7-8cc8-b7aefc32303e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003954566s
addons_test.go:938: (dbg) Run:  kubectl --context addons-835847 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 ssh "cat /opt/local-path-provisioner/pvc-cbe6b315-6250-41be-8637-3b1045803afa_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-835847 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-835847 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.019991936s)
--- PASS: TestAddons/parallel/LocalPath (53.04s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pxf2p" [29a178e4-9317-46ca-b2a2-4a1fa8ca2860] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004310012s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-835847
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.45s)

                                                
                                    
TestAddons/parallel/Yakd (10.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5zgdz" [2c304f3c-959b-4490-b330-070c469fec4b] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003557427s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-835847 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 addons disable yakd --alsologtostderr -v=1: (5.864138726s)
--- PASS: TestAddons/parallel/Yakd (10.87s)

                                                
                                    
TestAddons/StoppedEnableDisable (6.01s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-835847
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-835847: (5.765125776s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-835847
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-835847
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-835847
--- PASS: TestAddons/StoppedEnableDisable (6.01s)

                                                
                                    
TestCertOptions (43.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-531609 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-531609 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (41.249058724s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-531609 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-531609 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-531609 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-531609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-531609
E0927 01:08:35.120564    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-531609: (1.985145474s)
--- PASS: TestCertOptions (43.91s)
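cert_options_test.go:60 above verifies the custom `--apiserver-ips` / `--apiserver-names` by dumping the apiserver certificate with `openssl x509 -text`. A hedged sketch of that assertion over the captured text (the helper name `cert_has_sans` is invented for illustration):

```shell
# Hypothetical check: given the output of
#   minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
# assert that each expected Subject Alternative Name entry is present.
cert_has_sans() {
  local cert_text="$1"; shift
  local san
  for san in "$@"; do
    # each SAN (e.g. "DNS:www.google.com" or "IP Address:192.168.15.15")
    # must appear somewhere in the dumped certificate text
    printf '%s\n' "$cert_text" | grep -q "$san" || return 1
  done
  return 0
}
```

For the profile above the expected entries would include `DNS:www.google.com` and `IP Address:192.168.15.15`.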

                                                
                                    
TestCertExpiration (247.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-526861 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-526861 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (42.026073432s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-526861 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0927 01:11:29.952264    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-526861 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (22.754134043s)
helpers_test.go:175: Cleaning up "cert-expiration-526861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-526861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-526861: (2.683445071s)
--- PASS: TestCertExpiration (247.46s)

                                                
                                    
TestDockerFlags (43.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-482217 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-482217 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.949353206s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-482217 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-482217 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-482217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-482217
E0927 01:07:52.043569    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-482217: (2.54546838s)
--- PASS: TestDockerFlags (43.11s)
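docker_test.go:56 above confirms the `--docker-env` values by reading the daemon's systemd `Environment` property over ssh. A sketch of that check (the helper name and argument shape are assumptions, not the harness's actual code):

```shell
# Hypothetical check: given the output of
#   minikube -p <profile> ssh "sudo systemctl show docker --property=Environment --no-pager"
# assert that each expected KEY=VALUE pair passed via --docker-env is present.
docker_env_contains() {
  local env_line="$1"; shift
  local pair
  for pair in "$@"; do
    case "$env_line" in
      *"$pair"*) ;;          # pair found, keep checking the rest
      *) return 1 ;;         # a requested pair is missing -> fail
    esac
  done
  return 0
}
```

For the run above: `docker_env_contains "$(out/minikube-linux-arm64 -p docker-flags-482217 ssh 'sudo systemctl show docker --property=Environment --no-pager')" FOO=BAR BAZ=BAT`.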

                                                
                                    
TestForceSystemdFlag (44.81s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-724084 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-724084 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.832072861s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-724084 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-724084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-724084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-724084: (2.406966978s)
--- PASS: TestForceSystemdFlag (44.81s)

                                                
                                    
TestForceSystemdEnv (39.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-832276 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-832276 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.506368174s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-832276 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-832276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-832276
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-832276: (2.501902044s)
--- PASS: TestForceSystemdEnv (39.63s)

                                                
                                    
TestErrorSpam/setup (30.65s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-244519 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-244519 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-244519 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-244519 --driver=docker  --container-runtime=docker: (30.654310002s)
--- PASS: TestErrorSpam/setup (30.65s)

                                                
                                    
TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
TestErrorSpam/status (0.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 status
--- PASS: TestErrorSpam/status (0.98s)

                                                
                                    
TestErrorSpam/pause (1.3s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 pause
--- PASS: TestErrorSpam/pause (1.30s)

                                                
                                    
TestErrorSpam/unpause (1.46s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 unpause
--- PASS: TestErrorSpam/unpause (1.46s)

                                                
                                    
TestErrorSpam/stop (10.91s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 stop: (10.734175247s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-244519 --log_dir /tmp/nospam-244519 stop
--- PASS: TestErrorSpam/stop (10.91s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19711-2273/.minikube/files/etc/test/nested/copy/7598/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787765 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-787765 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.063080997s)
--- PASS: TestFunctional/serial/StartWithProxy (40.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (31.89s)

=== RUN   TestFunctional/serial/SoftStart
I0927 00:31:23.591281    7598 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787765 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-787765 --alsologtostderr -v=8: (31.888976256s)
functional_test.go:663: soft start took 31.890724344s for "functional-787765" cluster.
I0927 00:31:55.480549    7598 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (31.89s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-787765 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-787765 cache add registry.k8s.io/pause:3.1: (1.107548561s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-787765 cache add registry.k8s.io/pause:3.3: (1.089173382s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-787765 /tmp/TestFunctionalserialCacheCmdcacheadd_local1079789377/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cache add minikube-local-cache-test:functional-787765
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cache delete minikube-local-cache-test:functional-787765
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-787765
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.94s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.219903ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 kubectl -- --context functional-787765 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-787765 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
TestFunctional/serial/ExtraConfig (41.08s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787765 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-787765 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.0751316s)
functional_test.go:761: restart took 41.075235746s for "functional-787765" cluster.
I0927 00:32:43.104238    7598 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (41.08s)
TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-787765 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
TestFunctional/serial/LogsCmd (1.08s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-787765 logs: (1.076584125s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)
TestFunctional/serial/LogsFileCmd (1.17s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 logs --file /tmp/TestFunctionalserialLogsFileCmd2569167638/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-787765 logs --file /tmp/TestFunctionalserialLogsFileCmd2569167638/001/logs.txt: (1.169979749s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)
TestFunctional/serial/InvalidService (4.79s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-787765 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-787765
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-787765: exit status 115 (555.916437ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31625 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-787765 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.79s)
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 config get cpus: exit status 14 (69.815303ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 config get cpus: exit status 14 (71.32541ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
TestFunctional/parallel/DashboardCmd (12.33s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787765 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787765 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49087: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.33s)
TestFunctional/parallel/DryRun (0.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787765 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787765 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (185.503293ms)
-- stdout --
	* [functional-787765] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0927 00:33:24.290612   48779 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:33:24.290737   48779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:24.290789   48779 out.go:358] Setting ErrFile to fd 2...
	I0927 00:33:24.290795   48779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:24.291070   48779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:33:24.291638   48779 out.go:352] Setting JSON to false
	I0927 00:33:24.292748   48779 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4556,"bootTime":1727392649,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0927 00:33:24.292821   48779 start.go:139] virtualization:  
	I0927 00:33:24.296262   48779 out.go:177] * [functional-787765] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:33:24.299404   48779 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:33:24.299465   48779 notify.go:220] Checking for updates...
	I0927 00:33:24.303494   48779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:33:24.305206   48779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	I0927 00:33:24.306804   48779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	I0927 00:33:24.308677   48779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:33:24.310554   48779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:33:24.312905   48779 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:33:24.313478   48779 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:33:24.343306   48779 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:33:24.343430   48779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:24.412361   48779 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:33:24.403096538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:24.412487   48779 docker.go:318] overlay module found
	I0927 00:33:24.414569   48779 out.go:177] * Using the docker driver based on existing profile
	I0927 00:33:24.416665   48779 start.go:297] selected driver: docker
	I0927 00:33:24.416693   48779 start.go:901] validating driver "docker" against &{Name:functional-787765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-787765 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:33:24.416798   48779 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:33:24.419521   48779 out.go:201] 
	W0927 00:33:24.421889   48779 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 00:33:24.423719   48779 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787765 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)
TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787765 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787765 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (180.146744ms)
-- stdout --
	* [functional-787765] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0927 00:33:24.114234   48738 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:33:24.114352   48738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:24.114364   48738 out.go:358] Setting ErrFile to fd 2...
	I0927 00:33:24.114371   48738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:24.115258   48738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:33:24.115644   48738 out.go:352] Setting JSON to false
	I0927 00:33:24.116726   48738 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4555,"bootTime":1727392649,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0927 00:33:24.116803   48738 start.go:139] virtualization:  
	I0927 00:33:24.119503   48738 out.go:177] * [functional-787765] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0927 00:33:24.121748   48738 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:33:24.121908   48738 notify.go:220] Checking for updates...
	I0927 00:33:24.125636   48738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:33:24.127705   48738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	I0927 00:33:24.129363   48738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	I0927 00:33:24.131422   48738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:33:24.133300   48738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:33:24.135468   48738 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:33:24.135981   48738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:33:24.161846   48738 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:33:24.161961   48738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:24.228619   48738 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:33:24.219255111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:24.228730   48738 docker.go:318] overlay module found
	I0927 00:33:24.230636   48738 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0927 00:33:24.232236   48738 start.go:297] selected driver: docker
	I0927 00:33:24.232252   48738 start.go:901] validating driver "docker" against &{Name:functional-787765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-787765 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:33:24.232362   48738 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:33:24.234704   48738 out.go:201] 
	W0927 00:33:24.236754   48738 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	(English: Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB — the French output is the behavior under test here)
	I0927 00:33:24.238431   48738 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-787765 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-787765 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-rchzp" [14ead96a-474e-4838-a91f-c88c47193fa7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-rchzp" [14ead96a-474e-4838-a91f-c88c47193fa7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.002901503s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30801
functional_test.go:1675: http://192.168.49.2:30801: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-rchzp

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30801
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.61s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.6s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8473e81e-d144-4e70-9dd4-18c111fdb22b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003674288s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-787765 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-787765 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-787765 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-787765 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7510de98-e715-4b73-b726-967e19daa6f7] Pending
helpers_test.go:344: "sp-pod" [7510de98-e715-4b73-b726-967e19daa6f7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7510de98-e715-4b73-b726-967e19daa6f7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00408872s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-787765 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-787765 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-787765 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c55d4460-badd-4c2b-885c-7d2b5de90a28] Pending
helpers_test.go:344: "sp-pod" [c55d4460-badd-4c2b-885c-7d2b5de90a28] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c55d4460-badd-4c2b-885c-7d2b5de90a28] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003740337s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-787765 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.60s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.65s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh -n functional-787765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cp functional-787765:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd491977293/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh -n functional-787765 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh -n functional-787765 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)

                                                
                                    
TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7598/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo cat /etc/test/nested/copy/7598/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
TestFunctional/parallel/CertSync (1.91s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo cat /etc/ssl/certs/7598.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo cat /usr/share/ca-certificates/7598.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo cat /etc/ssl/certs/75982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo cat /usr/share/ca-certificates/75982.pem"
E0927 00:33:40.255089    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.91s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-787765 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 ssh "sudo systemctl is-active crio": exit status 1 (348.785728ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

                                                
                                    
TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787765 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787765 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-787765 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-787765 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46093: os: process already finished
helpers_test.go:508: unable to kill pid 45894: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787765 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-787765 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1e408c66-5626-48bd-860c-da1b3e9c3a78] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1e408c66-5626-48bd-860c-da1b3e9c3a78] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.005442292s
I0927 00:33:00.486982    7598 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-787765 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.220.106 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-787765 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-787765 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-787765 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-9xmmm" [498d49bb-2644-4763-bf33-ee2c8a5eea8b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-9xmmm" [498d49bb-2644-4763-bf33-ee2c8a5eea8b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004073813s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "372.318664ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.41521ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "341.96744ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "50.889581ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdany-port2899952435/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727397200172608447" to /tmp/TestFunctionalparallelMountCmdany-port2899952435/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727397200172608447" to /tmp/TestFunctionalparallelMountCmdany-port2899952435/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727397200172608447" to /tmp/TestFunctionalparallelMountCmdany-port2899952435/001/test-1727397200172608447
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.667533ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:33:20.475164    7598 retry.go:31] will retry after 606.298978ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:33 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:33 test-1727397200172608447
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh cat /mount-9p/test-1727397200172608447
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-787765 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cebe6ccf-3ea8-4533-b01d-3286e5b14e58] Pending
helpers_test.go:344: "busybox-mount" [cebe6ccf-3ea8-4533-b01d-3286e5b14e58] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cebe6ccf-3ea8-4533-b01d-3286e5b14e58] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cebe6ccf-3ea8-4533-b01d-3286e5b14e58] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003420915s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-787765 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdany-port2899952435/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 service list -o json
functional_test.go:1494: Took "549.328145ms" to run "out/minikube-linux-arm64 -p functional-787765 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30770
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30770
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.29s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdspecific-port2411551285/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (490.729552ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 00:33:28.649841    7598 retry.go:31] will retry after 692.592265ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdspecific-port2411551285/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 ssh "sudo umount -f /mount-9p": exit status 1 (291.85525ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-787765 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdspecific-port2411551285/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.29s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273713963/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273713963/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273713963/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T" /mount1: exit status 1 (694.945614ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 00:33:31.145222    7598 retry.go:31] will retry after 602.001339ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-787765 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273713963/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273713963/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787765 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2273713963/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.26s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-787765 version -o=json --components: (1.048263236s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787765 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-787765
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-787765
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787765 image ls --format short --alsologtostderr:
I0927 00:33:40.812227   51893 out.go:345] Setting OutFile to fd 1 ...
I0927 00:33:40.812433   51893 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:40.812460   51893 out.go:358] Setting ErrFile to fd 2...
I0927 00:33:40.812481   51893 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:40.812771   51893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
I0927 00:33:40.813432   51893 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:40.813602   51893 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:40.814124   51893 cli_runner.go:164] Run: docker container inspect functional-787765 --format={{.State.Status}}
I0927 00:33:40.849202   51893 ssh_runner.go:195] Run: systemctl --version
I0927 00:33:40.849253   51893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787765
I0927 00:33:40.876247   51893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/functional-787765/id_rsa Username:docker}
I0927 00:33:40.968784   51893 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787765 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-787765 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-787765 | e4262a1275ba3 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787765 image ls --format table --alsologtostderr:
I0927 00:33:41.566555   52135 out.go:345] Setting OutFile to fd 1 ...
I0927 00:33:41.566806   52135 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.566841   52135 out.go:358] Setting ErrFile to fd 2...
I0927 00:33:41.566866   52135 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.567164   52135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
I0927 00:33:41.567951   52135 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.568154   52135 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.568695   52135 cli_runner.go:164] Run: docker container inspect functional-787765 --format={{.State.Status}}
I0927 00:33:41.589244   52135 ssh_runner.go:195] Run: systemctl --version
I0927 00:33:41.589302   52135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787765
I0927 00:33:41.611021   52135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/functional-787765/id_rsa Username:docker}
I0927 00:33:41.712590   52135 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787765 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-787765"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"e4262a1275ba35d38b86825296be549c1efca0ccc8a1bac38805412c1a6b4bbf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-787765"],"size":"30"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787765 image ls --format json --alsologtostderr:
I0927 00:33:41.332513   52051 out.go:345] Setting OutFile to fd 1 ...
I0927 00:33:41.332659   52051 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.332664   52051 out.go:358] Setting ErrFile to fd 2...
I0927 00:33:41.332669   52051 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.332918   52051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
I0927 00:33:41.333535   52051 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.335435   52051 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.335975   52051 cli_runner.go:164] Run: docker container inspect functional-787765 --format={{.State.Status}}
I0927 00:33:41.357193   52051 ssh_runner.go:195] Run: systemctl --version
I0927 00:33:41.357248   52051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787765
I0927 00:33:41.375169   52051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/functional-787765/id_rsa Username:docker}
I0927 00:33:41.465338   52051 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787765 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-787765
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: e4262a1275ba35d38b86825296be549c1efca0ccc8a1bac38805412c1a6b4bbf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-787765
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787765 image ls --format yaml --alsologtostderr:
I0927 00:33:41.073854   51985 out.go:345] Setting OutFile to fd 1 ...
I0927 00:33:41.074005   51985 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.074030   51985 out.go:358] Setting ErrFile to fd 2...
I0927 00:33:41.074044   51985 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.074407   51985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
I0927 00:33:41.075372   51985 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.075567   51985 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.076373   51985 cli_runner.go:164] Run: docker container inspect functional-787765 --format={{.State.Status}}
I0927 00:33:41.103820   51985 ssh_runner.go:195] Run: systemctl --version
I0927 00:33:41.103875   51985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787765
I0927 00:33:41.122934   51985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/functional-787765/id_rsa Username:docker}
I0927 00:33:41.212241   51985 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787765 ssh pgrep buildkitd: exit status 1 (331.385932ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image build -t localhost/my-image:functional-787765 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-787765 image build -t localhost/my-image:functional-787765 testdata/build --alsologtostderr: (2.738735103s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787765 image build -t localhost/my-image:functional-787765 testdata/build --alsologtostderr:
I0927 00:33:41.258405   52036 out.go:345] Setting OutFile to fd 1 ...
I0927 00:33:41.258617   52036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.258648   52036 out.go:358] Setting ErrFile to fd 2...
I0927 00:33:41.258667   52036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:41.258924   52036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
I0927 00:33:41.259574   52036 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.260956   52036 config.go:182] Loaded profile config "functional-787765": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:41.261517   52036 cli_runner.go:164] Run: docker container inspect functional-787765 --format={{.State.Status}}
I0927 00:33:41.284558   52036 ssh_runner.go:195] Run: systemctl --version
I0927 00:33:41.284663   52036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787765
I0927 00:33:41.321711   52036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/functional-787765/id_rsa Username:docker}
I0927 00:33:41.416659   52036 build_images.go:161] Building image from path: /tmp/build.2468707293.tar
I0927 00:33:41.416725   52036 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 00:33:41.427836   52036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2468707293.tar
I0927 00:33:41.431148   52036 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2468707293.tar: stat -c "%s %y" /var/lib/minikube/build/build.2468707293.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2468707293.tar': No such file or directory
I0927 00:33:41.431180   52036 ssh_runner.go:362] scp /tmp/build.2468707293.tar --> /var/lib/minikube/build/build.2468707293.tar (3072 bytes)
I0927 00:33:41.459412   52036 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2468707293
I0927 00:33:41.473811   52036 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2468707293 -xf /var/lib/minikube/build/build.2468707293.tar
I0927 00:33:41.491274   52036 docker.go:360] Building image: /var/lib/minikube/build/build.2468707293
I0927 00:33:41.491367   52036 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-787765 /var/lib/minikube/build/build.2468707293
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:a85b577673dca0e64481de69b8ac271e0a7c9bbc78d71dd8d13a86217ec06b17 done
#8 naming to localhost/my-image:functional-787765 done
#8 DONE 0.1s
I0927 00:33:43.905560   52036 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-787765 /var/lib/minikube/build/build.2468707293: (2.414170404s)
I0927 00:33:43.905630   52036 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2468707293
I0927 00:33:43.914140   52036 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2468707293.tar
I0927 00:33:43.922212   52036 build_images.go:217] Built localhost/my-image:functional-787765 from /tmp/build.2468707293.tar
I0927 00:33:43.922246   52036 build_images.go:133] succeeded building to: functional-787765
I0927 00:33:43.922252   52036 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)

TestFunctional/parallel/ImageCommands/Setup (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-787765
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image load --daemon kicbase/echo-server:functional-787765 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls
E0927 00:33:35.121726    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.128841    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.140239    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.162413    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.204797    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.286180    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image load --daemon kicbase/echo-server:functional-787765 --alsologtostderr
E0927 00:33:35.447865    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.769206    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-787765
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image load --daemon kicbase/echo-server:functional-787765 --alsologtostderr
E0927 00:33:36.411475    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
2024/09/27 00:33:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/DockerEnv/bash (1.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-787765 docker-env) && out/minikube-linux-arm64 status -p functional-787765"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-787765 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.28s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image save kicbase/echo-server:functional-787765 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image rm kicbase/echo-server:functional-787765 --alsologtostderr
E0927 00:33:37.693773    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-787765
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-787765 image save --daemon kicbase/echo-server:functional-787765 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-787765
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-787765
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-787765
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-787765
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (121.29s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-230781 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 00:33:55.618300    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:34:16.099690    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:34:57.062079    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-230781 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m0.458514701s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (121.29s)

TestMultiControlPlane/serial/DeployApp (42.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-230781 -- rollout status deployment/busybox: (5.289809764s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 00:35:53.788020    7598 retry.go:31] will retry after 1.189651395s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 00:35:55.154321    7598 retry.go:31] will retry after 850.06674ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 00:35:56.186430    7598 retry.go:31] will retry after 1.392749559s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 00:35:57.744389    7598 retry.go:31] will retry after 2.405911245s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 00:36:00.343995    7598 retry.go:31] will retry after 6.053691931s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 00:36:06.549824    7598 retry.go:31] will retry after 8.739369418s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 00:36:15.440264    7598 retry.go:31] will retry after 12.050418149s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0927 00:36:18.985829    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-5zzh9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-d7rl7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-rnxz6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-5zzh9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-d7rl7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-rnxz6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-5zzh9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-d7rl7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-rnxz6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (42.18s)

TestMultiControlPlane/serial/PingHostFromPods (1.56s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-5zzh9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-5zzh9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-d7rl7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-d7rl7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-rnxz6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-230781 -- exec busybox-7dff88458-rnxz6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)

TestMultiControlPlane/serial/AddWorkerNode (25.02s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-230781 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-230781 -v=7 --alsologtostderr: (24.040196151s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.02s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-230781 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

TestMultiControlPlane/serial/CopyFile (18.31s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-230781 status --output json -v=7 --alsologtostderr: (1.017300025s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp testdata/cp-test.txt ha-230781:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2769413001/001/cp-test_ha-230781.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781:/home/docker/cp-test.txt ha-230781-m02:/home/docker/cp-test_ha-230781_ha-230781-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test_ha-230781_ha-230781-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781:/home/docker/cp-test.txt ha-230781-m03:/home/docker/cp-test_ha-230781_ha-230781-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test_ha-230781_ha-230781-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781:/home/docker/cp-test.txt ha-230781-m04:/home/docker/cp-test_ha-230781_ha-230781-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test_ha-230781_ha-230781-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp testdata/cp-test.txt ha-230781-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2769413001/001/cp-test_ha-230781-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m02:/home/docker/cp-test.txt ha-230781:/home/docker/cp-test_ha-230781-m02_ha-230781.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test_ha-230781-m02_ha-230781.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m02:/home/docker/cp-test.txt ha-230781-m03:/home/docker/cp-test_ha-230781-m02_ha-230781-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test_ha-230781-m02_ha-230781-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m02:/home/docker/cp-test.txt ha-230781-m04:/home/docker/cp-test_ha-230781-m02_ha-230781-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test_ha-230781-m02_ha-230781-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp testdata/cp-test.txt ha-230781-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2769413001/001/cp-test_ha-230781-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m03:/home/docker/cp-test.txt ha-230781:/home/docker/cp-test_ha-230781-m03_ha-230781.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test_ha-230781-m03_ha-230781.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m03:/home/docker/cp-test.txt ha-230781-m02:/home/docker/cp-test_ha-230781-m03_ha-230781-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test_ha-230781-m03_ha-230781-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m03:/home/docker/cp-test.txt ha-230781-m04:/home/docker/cp-test_ha-230781-m03_ha-230781-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test_ha-230781-m03_ha-230781-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp testdata/cp-test.txt ha-230781-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2769413001/001/cp-test_ha-230781-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m04:/home/docker/cp-test.txt ha-230781:/home/docker/cp-test_ha-230781-m04_ha-230781.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781 "sudo cat /home/docker/cp-test_ha-230781-m04_ha-230781.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m04:/home/docker/cp-test.txt ha-230781-m02:/home/docker/cp-test_ha-230781-m04_ha-230781-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m02 "sudo cat /home/docker/cp-test_ha-230781-m04_ha-230781-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 cp ha-230781-m04:/home/docker/cp-test.txt ha-230781-m03:/home/docker/cp-test_ha-230781-m04_ha-230781-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 ssh -n ha-230781-m03 "sudo cat /home/docker/cp-test_ha-230781-m04_ha-230781-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.31s)

TestMultiControlPlane/serial/StopSecondaryNode (11.84s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-230781 node stop m02 -v=7 --alsologtostderr: (11.014029313s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr: exit status 7 (821.24557ms)

-- stdout --
	ha-230781
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230781-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-230781-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230781-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0927 00:37:27.247245   74720 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:37:27.247374   74720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:27.247385   74720 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:27.247390   74720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:27.247639   74720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:37:27.247805   74720 out.go:352] Setting JSON to false
	I0927 00:37:27.247831   74720 mustload.go:65] Loading cluster: ha-230781
	I0927 00:37:27.247939   74720 notify.go:220] Checking for updates...
	I0927 00:37:27.248336   74720 config.go:182] Loaded profile config "ha-230781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:37:27.248357   74720 status.go:174] checking status of ha-230781 ...
	I0927 00:37:27.248942   74720 cli_runner.go:164] Run: docker container inspect ha-230781 --format={{.State.Status}}
	I0927 00:37:27.268365   74720 status.go:364] ha-230781 host status = "Running" (err=<nil>)
	I0927 00:37:27.268390   74720 host.go:66] Checking if "ha-230781" exists ...
	I0927 00:37:27.268717   74720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-230781
	I0927 00:37:27.296227   74720 host.go:66] Checking if "ha-230781" exists ...
	I0927 00:37:27.296611   74720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:37:27.296653   74720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-230781
	I0927 00:37:27.317468   74720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/ha-230781/id_rsa Username:docker}
	I0927 00:37:27.409236   74720 ssh_runner.go:195] Run: systemctl --version
	I0927 00:37:27.413476   74720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:27.425888   74720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:37:27.483461   74720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-27 00:37:27.473714857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:37:27.484048   74720 kubeconfig.go:125] found "ha-230781" server: "https://192.168.49.254:8443"
	I0927 00:37:27.484134   74720 api_server.go:166] Checking apiserver status ...
	I0927 00:37:27.484221   74720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:37:27.495764   74720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2313/cgroup
	I0927 00:37:27.505140   74720 api_server.go:182] apiserver freezer: "5:freezer:/docker/17a20f8721754e2b7b2537db579c0f4a2e706b4d1ddc80291cb8bb2a7428389b/kubepods/burstable/pod1d466d7565aeda73680a626456be4821/5a1d64de879e7fc076937eafb52d9d8b70a23905b6f4a973f35d27b02d0c623c"
	I0927 00:37:27.505211   74720 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/17a20f8721754e2b7b2537db579c0f4a2e706b4d1ddc80291cb8bb2a7428389b/kubepods/burstable/pod1d466d7565aeda73680a626456be4821/5a1d64de879e7fc076937eafb52d9d8b70a23905b6f4a973f35d27b02d0c623c/freezer.state
	I0927 00:37:27.514856   74720 api_server.go:204] freezer state: "THAWED"
	I0927 00:37:27.514888   74720 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 00:37:27.525198   74720 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 00:37:27.525231   74720 status.go:456] ha-230781 apiserver status = Running (err=<nil>)
	I0927 00:37:27.525243   74720 status.go:176] ha-230781 status: &{Name:ha-230781 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:37:27.525259   74720 status.go:174] checking status of ha-230781-m02 ...
	I0927 00:37:27.525581   74720 cli_runner.go:164] Run: docker container inspect ha-230781-m02 --format={{.State.Status}}
	I0927 00:37:27.542088   74720 status.go:364] ha-230781-m02 host status = "Stopped" (err=<nil>)
	I0927 00:37:27.542109   74720 status.go:377] host is not running, skipping remaining checks
	I0927 00:37:27.542116   74720 status.go:176] ha-230781-m02 status: &{Name:ha-230781-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:37:27.542135   74720 status.go:174] checking status of ha-230781-m03 ...
	I0927 00:37:27.542444   74720 cli_runner.go:164] Run: docker container inspect ha-230781-m03 --format={{.State.Status}}
	I0927 00:37:27.576536   74720 status.go:364] ha-230781-m03 host status = "Running" (err=<nil>)
	I0927 00:37:27.576559   74720 host.go:66] Checking if "ha-230781-m03" exists ...
	I0927 00:37:27.576860   74720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-230781-m03
	I0927 00:37:27.601002   74720 host.go:66] Checking if "ha-230781-m03" exists ...
	I0927 00:37:27.601371   74720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:37:27.601459   74720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-230781-m03
	I0927 00:37:27.641830   74720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/ha-230781-m03/id_rsa Username:docker}
	I0927 00:37:27.756841   74720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:27.773819   74720 kubeconfig.go:125] found "ha-230781" server: "https://192.168.49.254:8443"
	I0927 00:37:27.773912   74720 api_server.go:166] Checking apiserver status ...
	I0927 00:37:27.774032   74720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:37:27.801743   74720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2171/cgroup
	I0927 00:37:27.815505   74720 api_server.go:182] apiserver freezer: "5:freezer:/docker/d201f65ab534937bd34cc5008f8be3184b3145fade68c569472df775f69037a1/kubepods/burstable/pod29811a4a9141d27c3c54574c8af12931/9b030a4027191dc65281920ba99d1f83a0838c0baef95b73e527f8639f35a794"
	I0927 00:37:27.815614   74720 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d201f65ab534937bd34cc5008f8be3184b3145fade68c569472df775f69037a1/kubepods/burstable/pod29811a4a9141d27c3c54574c8af12931/9b030a4027191dc65281920ba99d1f83a0838c0baef95b73e527f8639f35a794/freezer.state
	I0927 00:37:27.828783   74720 api_server.go:204] freezer state: "THAWED"
	I0927 00:37:27.828851   74720 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 00:37:27.836775   74720 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 00:37:27.836839   74720 status.go:456] ha-230781-m03 apiserver status = Running (err=<nil>)
	I0927 00:37:27.836853   74720 status.go:176] ha-230781-m03 status: &{Name:ha-230781-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:37:27.836885   74720 status.go:174] checking status of ha-230781-m04 ...
	I0927 00:37:27.837186   74720 cli_runner.go:164] Run: docker container inspect ha-230781-m04 --format={{.State.Status}}
	I0927 00:37:27.870030   74720 status.go:364] ha-230781-m04 host status = "Running" (err=<nil>)
	I0927 00:37:27.870059   74720 host.go:66] Checking if "ha-230781-m04" exists ...
	I0927 00:37:27.870371   74720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-230781-m04
	I0927 00:37:27.894343   74720 host.go:66] Checking if "ha-230781-m04" exists ...
	I0927 00:37:27.894745   74720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:37:27.894797   74720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-230781-m04
	I0927 00:37:27.912344   74720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/ha-230781-m04/id_rsa Username:docker}
	I0927 00:37:28.005378   74720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:28.019897   74720 status.go:176] ha-230781-m04 status: &{Name:ha-230781-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.84s)
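For triage, the `minikube status` stdout captured above can be summarized mechanically rather than read node by node; note that exit status 7 accompanies any stopped node in this run. A minimal sketch (the sample text is a trimmed copy of this run's stdout, not generic minikube output):

```python
import re

# Trimmed copy of the `minikube status` stdout captured above.
STATUS = """\
ha-230781
type: Control Plane
host: Running

ha-230781-m02
type: Control Plane
host: Stopped

ha-230781-m04
type: Worker
host: Running
"""

def host_states(text):
    """Map each node name to its reported host state."""
    states = {}
    node = None
    for line in text.splitlines():
        if line and ":" not in line:
            node = line.strip()          # a bare line is a node name
        m = re.match(r"host: (\w+)", line)
        if m and node:
            states[node] = m.group(1)
    return states

states = host_states(STATUS)
print(states)
print("stopped:", sorted(n for n, s in states.items() if s == "Stopped"))
```

This kind of parse is only a convenience for scanning long status dumps like the ones in this report; it assumes the plain-text layout shown above.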

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (65.43s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 node start m02 -v=7 --alsologtostderr
E0927 00:37:52.044378    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:52.051728    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:52.063111    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:52.084666    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:52.126157    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:52.207652    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:52.369158    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:52.690778    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:53.332221    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:54.613697    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:57.175429    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:02.297572    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:12.539405    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:33.020975    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-230781 node start m02 -v=7 --alsologtostderr: (1m4.359103221s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (65.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0927 00:38:35.120285    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (244.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-230781 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-230781 -v=7 --alsologtostderr
E0927 00:39:02.827198    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-230781 -v=7 --alsologtostderr: (33.716477551s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-230781 --wait=true -v=7 --alsologtostderr
E0927 00:39:13.982660    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:35.904923    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-230781 --wait=true -v=7 --alsologtostderr: (3m30.508214986s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-230781
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (244.39s)
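The `--- PASS: name (Ns)` footers throughout this report follow standard `go test` output, so per-test durations can be extracted and ranked when hunting for slow tests. A small sketch over two footers copied from this section:

```python
import re

# Two PASS footers copied verbatim from this report.
LOG = """\
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (244.39s)
--- PASS: TestMultiControlPlane/serial/StopCluster (32.74s)
"""

def durations(log):
    """Return (seconds, test name) pairs from go test PASS footers, slowest first."""
    pairs = [(float(m.group(2)), m.group(1))
             for m in re.finditer(r"--- PASS: (\S+) \(([\d.]+)s\)", log)]
    return sorted(pairs, reverse=True)

for secs, name in durations(LOG):
    print(f"{secs:8.2f}s  {name}")
```

The same regex applies to `--- FAIL:` footers by swapping the literal, which is handy for pulling the one failing test out of a 342-test report like this one.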

TestMultiControlPlane/serial/DeleteSecondaryNode (11.06s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-230781 node delete m03 -v=7 --alsologtostderr: (10.154255233s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.06s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

TestMultiControlPlane/serial/StopCluster (32.74s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 stop -v=7 --alsologtostderr
E0927 00:42:52.043976    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:43:19.746832    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-230781 stop -v=7 --alsologtostderr: (32.63009412s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr: exit status 7 (107.764895ms)

-- stdout --
	ha-230781
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-230781-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-230781-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 00:43:24.012326  102053 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:43:24.012502  102053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:43:24.012533  102053 out.go:358] Setting ErrFile to fd 2...
	I0927 00:43:24.012556  102053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:43:24.012834  102053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:43:24.013045  102053 out.go:352] Setting JSON to false
	I0927 00:43:24.013100  102053 mustload.go:65] Loading cluster: ha-230781
	I0927 00:43:24.013132  102053 notify.go:220] Checking for updates...
	I0927 00:43:24.013582  102053 config.go:182] Loaded profile config "ha-230781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:43:24.013620  102053 status.go:174] checking status of ha-230781 ...
	I0927 00:43:24.014547  102053 cli_runner.go:164] Run: docker container inspect ha-230781 --format={{.State.Status}}
	I0927 00:43:24.032979  102053 status.go:364] ha-230781 host status = "Stopped" (err=<nil>)
	I0927 00:43:24.033002  102053 status.go:377] host is not running, skipping remaining checks
	I0927 00:43:24.033010  102053 status.go:176] ha-230781 status: &{Name:ha-230781 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:43:24.033045  102053 status.go:174] checking status of ha-230781-m02 ...
	I0927 00:43:24.033358  102053 cli_runner.go:164] Run: docker container inspect ha-230781-m02 --format={{.State.Status}}
	I0927 00:43:24.057790  102053 status.go:364] ha-230781-m02 host status = "Stopped" (err=<nil>)
	I0927 00:43:24.057815  102053 status.go:377] host is not running, skipping remaining checks
	I0927 00:43:24.057823  102053 status.go:176] ha-230781-m02 status: &{Name:ha-230781-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:43:24.057844  102053 status.go:174] checking status of ha-230781-m04 ...
	I0927 00:43:24.058168  102053 cli_runner.go:164] Run: docker container inspect ha-230781-m04 --format={{.State.Status}}
	I0927 00:43:24.075647  102053 status.go:364] ha-230781-m04 host status = "Stopped" (err=<nil>)
	I0927 00:43:24.075669  102053 status.go:377] host is not running, skipping remaining checks
	I0927 00:43:24.075676  102053 status.go:176] ha-230781-m04 status: &{Name:ha-230781-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.74s)

TestMultiControlPlane/serial/RestartCluster (90.97s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-230781 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 00:43:35.120525    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-230781 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m30.058275329s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (90.97s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)
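The Degraded/HAppy checks in this section only shell out to `profile list --output json` and inspect the reported cluster state. A sketch of that style of check over a hand-written sample document — the field names (`valid`, `Name`, `Status`) and the `Degraded` value here are assumptions inferred from the test names, not captured from this log:

```python
import json

# Hypothetical sample shaped like `minikube profile list --output json` output;
# the keys below are assumptions for illustration, not taken from this run.
SAMPLE = json.dumps({
    "valid": [
        {"Name": "ha-230781", "Status": "Degraded"},
    ],
    "invalid": [],
})

def degraded_profiles(doc):
    """Names of profiles whose reported status is Degraded."""
    data = json.loads(doc)
    return [p["Name"] for p in data.get("valid", []) if p.get("Status") == "Degraded"]

print(degraded_profiles(SAMPLE))
```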

TestMultiControlPlane/serial/AddSecondaryNode (45.52s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-230781 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-230781 --control-plane -v=7 --alsologtostderr: (44.54421501s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-230781 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

TestImageBuild/serial/Setup (31.31s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-944924 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-944924 --driver=docker  --container-runtime=docker: (31.30749798s)
--- PASS: TestImageBuild/serial/Setup (31.31s)

TestImageBuild/serial/NormalBuild (1.94s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-944924
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-944924: (1.938519395s)
--- PASS: TestImageBuild/serial/NormalBuild (1.94s)

TestImageBuild/serial/BuildWithBuildArg (1.08s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-944924
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-944924: (1.082955939s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.08s)

TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-944924
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.9s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-944924
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.90s)

TestJSONOutput/start/Command (75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-502622 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-502622 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m14.993462275s)
--- PASS: TestJSONOutput/start/Command (75.00s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-502622 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.51s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-502622 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-502622 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-502622 --output=json --user=testUser: (5.713823032s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-489770 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-489770 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.989381ms)

-- stdout --
	{"specversion":"1.0","id":"813b1e78-e1bd-4a13-9a5b-f0a265bf131a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-489770] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1918b02-e291-48f8-9145-6e9a509fae77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"f4a98b71-66df-464d-8a27-d7163e92fc4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"541577d4-0e4f-4624-996d-860ebf914eb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig"}}
	{"specversion":"1.0","id":"f58c39d5-45af-4c17-9e75-2880cea59edf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube"}}
	{"specversion":"1.0","id":"d7bb3b00-4314-43e8-a845-61f9c30fba4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bfd603e4-1830-4d4b-8ee2-1b6583e7fe38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d98b3028-e390-4028-a212-985da52eb7ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-489770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-489770
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.46s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-785821 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-785821 --network=: (31.366709369s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-785821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-785821
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-785821: (2.07381359s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.46s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-169880 --network=bridge
E0927 00:48:35.120194    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-169880 --network=bridge: (29.716139826s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-169880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-169880
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-169880: (1.578524569s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.32s)

                                                
                                    
TestKicExistingNetwork (32.67s)

=== RUN   TestKicExistingNetwork
I0927 00:48:58.344584    7598 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0927 00:48:58.360038    7598 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0927 00:48:58.360182    7598 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0927 00:48:58.360205    7598 cli_runner.go:164] Run: docker network inspect existing-network
W0927 00:48:58.374282    7598 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0927 00:48:58.374310    7598 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0927 00:48:58.374323    7598 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0927 00:48:58.374439    7598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 00:48:58.389825    7598 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74b14cc42363 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f5:7e:ad:8f} reservation:<nil>}
I0927 00:48:58.390147    7598 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bab1c0}
I0927 00:48:58.390169    7598 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0927 00:48:58.390218    7598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0927 00:48:58.459778    7598 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-804453 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-804453 --network=existing-network: (30.604002566s)
helpers_test.go:175: Cleaning up "existing-network-804453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-804453
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-804453: (1.921715744s)
I0927 00:49:31.001215    7598 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.67s)

                                                
                                    
TestKicCustomSubnet (35.73s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-346309 --subnet=192.168.60.0/24
E0927 00:49:58.188524    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-346309 --subnet=192.168.60.0/24: (33.686325624s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-346309 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-346309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-346309
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-346309: (2.019735392s)
--- PASS: TestKicCustomSubnet (35.73s)

                                                
                                    
TestKicStaticIP (35.15s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-616245 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-616245 --static-ip=192.168.200.200: (33.042847288s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-616245 ip
helpers_test.go:175: Cleaning up "static-ip-616245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-616245
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-616245: (1.965995366s)
--- PASS: TestKicStaticIP (35.15s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (74.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-033112 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-033112 --driver=docker  --container-runtime=docker: (31.317636094s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-035557 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-035557 --driver=docker  --container-runtime=docker: (38.134476348s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-033112
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-035557
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-035557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-035557
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-035557: (1.988950405s)
helpers_test.go:175: Cleaning up "first-033112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-033112
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-033112: (2.139888407s)
--- PASS: TestMinikubeProfile (74.88s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-082367 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-082367 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.161559957s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-082367 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.57s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-084213 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-084213 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.568203824s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.57s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-084213 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-082367 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-082367 --alsologtostderr -v=5: (1.488882553s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-084213 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.20s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-084213
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-084213: (1.198619804s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-084213
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-084213: (7.043825038s)
--- PASS: TestMountStart/serial/RestartStopped (8.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-084213 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (79.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910051 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 00:52:52.043275    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:53:35.120255    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910051 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.687212656s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.31s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (48.08s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-910051 -- rollout status deployment/busybox: (4.725929408s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:53:50.370890    7598 retry.go:31] will retry after 1.014209057s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:53:51.531803    7598 retry.go:31] will retry after 2.189181951s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:53:53.871434    7598 retry.go:31] will retry after 3.310649853s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:53:57.331065    7598 retry.go:31] will retry after 3.686747249s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:54:01.155247    7598 retry.go:31] will retry after 6.799788603s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:54:08.092525    7598 retry.go:31] will retry after 6.696058239s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 00:54:14.926243    7598 retry.go:31] will retry after 16.728030747s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0927 00:54:15.108661    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-8z5wj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-fcngd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-8z5wj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-fcngd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-8z5wj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-fcngd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (48.08s)

TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-8z5wj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-8z5wj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-fcngd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910051 -- exec busybox-7dff88458-fcngd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)

TestMultiNode/serial/AddNode (17.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-910051 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-910051 -v 3 --alsologtostderr: (17.149036681s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.82s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-910051 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.61s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp testdata/cp-test.txt multinode-910051:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1383298884/001/cp-test_multinode-910051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051:/home/docker/cp-test.txt multinode-910051-m02:/home/docker/cp-test_multinode-910051_multinode-910051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m02 "sudo cat /home/docker/cp-test_multinode-910051_multinode-910051-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051:/home/docker/cp-test.txt multinode-910051-m03:/home/docker/cp-test_multinode-910051_multinode-910051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m03 "sudo cat /home/docker/cp-test_multinode-910051_multinode-910051-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp testdata/cp-test.txt multinode-910051-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1383298884/001/cp-test_multinode-910051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051-m02:/home/docker/cp-test.txt multinode-910051:/home/docker/cp-test_multinode-910051-m02_multinode-910051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051 "sudo cat /home/docker/cp-test_multinode-910051-m02_multinode-910051.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051-m02:/home/docker/cp-test.txt multinode-910051-m03:/home/docker/cp-test_multinode-910051-m02_multinode-910051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m03 "sudo cat /home/docker/cp-test_multinode-910051-m02_multinode-910051-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp testdata/cp-test.txt multinode-910051-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1383298884/001/cp-test_multinode-910051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051-m03:/home/docker/cp-test.txt multinode-910051:/home/docker/cp-test_multinode-910051-m03_multinode-910051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051 "sudo cat /home/docker/cp-test_multinode-910051-m03_multinode-910051.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 cp multinode-910051-m03:/home/docker/cp-test.txt multinode-910051-m02:/home/docker/cp-test_multinode-910051-m03_multinode-910051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 ssh -n multinode-910051-m02 "sudo cat /home/docker/cp-test_multinode-910051-m03_multinode-910051-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.61s)

TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-910051 node stop m03: (1.199669459s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910051 status: exit status 7 (478.366283ms)

-- stdout --
	multinode-910051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr: exit status 7 (495.81215ms)

-- stdout --
	multinode-910051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0927 00:55:04.267972  176507 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:55:04.268206  176507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:55:04.268238  176507 out.go:358] Setting ErrFile to fd 2...
	I0927 00:55:04.268258  176507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:55:04.268501  176507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:55:04.268699  176507 out.go:352] Setting JSON to false
	I0927 00:55:04.268764  176507 mustload.go:65] Loading cluster: multinode-910051
	I0927 00:55:04.268842  176507 notify.go:220] Checking for updates...
	I0927 00:55:04.269221  176507 config.go:182] Loaded profile config "multinode-910051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:55:04.269265  176507 status.go:174] checking status of multinode-910051 ...
	I0927 00:55:04.270284  176507 cli_runner.go:164] Run: docker container inspect multinode-910051 --format={{.State.Status}}
	I0927 00:55:04.288847  176507 status.go:364] multinode-910051 host status = "Running" (err=<nil>)
	I0927 00:55:04.288876  176507 host.go:66] Checking if "multinode-910051" exists ...
	I0927 00:55:04.289234  176507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910051
	I0927 00:55:04.313221  176507 host.go:66] Checking if "multinode-910051" exists ...
	I0927 00:55:04.313534  176507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:55:04.313586  176507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910051
	I0927 00:55:04.336556  176507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/multinode-910051/id_rsa Username:docker}
	I0927 00:55:04.437296  176507 ssh_runner.go:195] Run: systemctl --version
	I0927 00:55:04.441617  176507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:55:04.453450  176507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:55:04.503761  176507 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-27 00:55:04.49419115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:55:04.504482  176507 kubeconfig.go:125] found "multinode-910051" server: "https://192.168.67.2:8443"
	I0927 00:55:04.504515  176507 api_server.go:166] Checking apiserver status ...
	I0927 00:55:04.504558  176507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:55:04.517872  176507 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2198/cgroup
	I0927 00:55:04.527115  176507 api_server.go:182] apiserver freezer: "5:freezer:/docker/fb6d0a42574fb83bdd261cb2172f4a7af463d4397991d343f01ac5c538fb5a13/kubepods/burstable/pod073e9ffd9a717a1307bb4c1438c5a9c0/4b86ae6b525feef6eeaebf73cb1a5c4004188321b37ab1012593c3e0f405ca3a"
	I0927 00:55:04.527188  176507 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fb6d0a42574fb83bdd261cb2172f4a7af463d4397991d343f01ac5c538fb5a13/kubepods/burstable/pod073e9ffd9a717a1307bb4c1438c5a9c0/4b86ae6b525feef6eeaebf73cb1a5c4004188321b37ab1012593c3e0f405ca3a/freezer.state
	I0927 00:55:04.536399  176507 api_server.go:204] freezer state: "THAWED"
	I0927 00:55:04.536424  176507 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0927 00:55:04.543830  176507 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0927 00:55:04.543857  176507 status.go:456] multinode-910051 apiserver status = Running (err=<nil>)
	I0927 00:55:04.543869  176507 status.go:176] multinode-910051 status: &{Name:multinode-910051 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:55:04.543885  176507 status.go:174] checking status of multinode-910051-m02 ...
	I0927 00:55:04.544222  176507 cli_runner.go:164] Run: docker container inspect multinode-910051-m02 --format={{.State.Status}}
	I0927 00:55:04.560422  176507 status.go:364] multinode-910051-m02 host status = "Running" (err=<nil>)
	I0927 00:55:04.560447  176507 host.go:66] Checking if "multinode-910051-m02" exists ...
	I0927 00:55:04.560737  176507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910051-m02
	I0927 00:55:04.576243  176507 host.go:66] Checking if "multinode-910051-m02" exists ...
	I0927 00:55:04.576552  176507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:55:04.576594  176507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910051-m02
	I0927 00:55:04.593112  176507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/multinode-910051-m02/id_rsa Username:docker}
	I0927 00:55:04.681262  176507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:55:04.693873  176507 status.go:176] multinode-910051-m02 status: &{Name:multinode-910051-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:55:04.693907  176507 status.go:174] checking status of multinode-910051-m03 ...
	I0927 00:55:04.694199  176507 cli_runner.go:164] Run: docker container inspect multinode-910051-m03 --format={{.State.Status}}
	I0927 00:55:04.710160  176507 status.go:364] multinode-910051-m03 host status = "Stopped" (err=<nil>)
	I0927 00:55:04.710181  176507 status.go:377] host is not running, skipping remaining checks
	I0927 00:55:04.710188  176507 status.go:176] multinode-910051-m03 status: &{Name:multinode-910051-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)

TestMultiNode/serial/StartAfterStop (10.35s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-910051 node start m03 -v=7 --alsologtostderr: (9.610819635s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.35s)

TestMultiNode/serial/RestartKeepsNodes (115.91s)

multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910051
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-910051
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-910051: (22.489479338s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910051 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910051 --wait=true -v=8 --alsologtostderr: (1m33.283254159s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910051
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.91s)

TestMultiNode/serial/DeleteNode (5.47s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-910051 node delete m03: (4.783743833s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.47s)

TestMultiNode/serial/StopMultiNode (21.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-910051 stop: (21.634635595s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910051 status: exit status 7 (92.890184ms)

-- stdout --
	multinode-910051
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910051-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr: exit status 7 (100.423601ms)

-- stdout --
	multinode-910051
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910051-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0927 00:57:38.212867  190034 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:57:38.213068  190034 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:57:38.213096  190034 out.go:358] Setting ErrFile to fd 2...
	I0927 00:57:38.213117  190034 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:57:38.213380  190034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
	I0927 00:57:38.213571  190034 out.go:352] Setting JSON to false
	I0927 00:57:38.213624  190034 mustload.go:65] Loading cluster: multinode-910051
	I0927 00:57:38.213661  190034 notify.go:220] Checking for updates...
	I0927 00:57:38.214069  190034 config.go:182] Loaded profile config "multinode-910051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:57:38.214117  190034 status.go:174] checking status of multinode-910051 ...
	I0927 00:57:38.214978  190034 cli_runner.go:164] Run: docker container inspect multinode-910051 --format={{.State.Status}}
	I0927 00:57:38.235421  190034 status.go:364] multinode-910051 host status = "Stopped" (err=<nil>)
	I0927 00:57:38.235441  190034 status.go:377] host is not running, skipping remaining checks
	I0927 00:57:38.235448  190034 status.go:176] multinode-910051 status: &{Name:multinode-910051 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:57:38.235481  190034 status.go:174] checking status of multinode-910051-m02 ...
	I0927 00:57:38.235783  190034 cli_runner.go:164] Run: docker container inspect multinode-910051-m02 --format={{.State.Status}}
	I0927 00:57:38.265063  190034 status.go:364] multinode-910051-m02 host status = "Stopped" (err=<nil>)
	I0927 00:57:38.265088  190034 status.go:377] host is not running, skipping remaining checks
	I0927 00:57:38.265095  190034 status.go:176] multinode-910051-m02 status: &{Name:multinode-910051-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.83s)

TestMultiNode/serial/RestartMultiNode (51.52s)

multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910051 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 00:57:52.043607    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910051 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (50.884473435s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910051 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.52s)

TestMultiNode/serial/ValidateNameConflict (37.41s)

multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910051
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910051-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-910051-m02 --driver=docker  --container-runtime=docker: exit status 14 (81.091947ms)

-- stdout --
	* [multinode-910051-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-910051-m02' is duplicated with machine name 'multinode-910051-m02' in profile 'multinode-910051'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910051-m03 --driver=docker  --container-runtime=docker
E0927 00:58:35.120242    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910051-m03 --driver=docker  --container-runtime=docker: (34.861394003s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-910051
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-910051: exit status 80 (318.836453ms)

-- stdout --
	* Adding node m03 to cluster multinode-910051 as [worker]

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-910051-m03 already exists in multinode-910051-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-910051-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-910051-m03: (2.09632836s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.41s)

TestPreload (148.4s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-385962 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-385962 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.687233893s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-385962 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-385962 image pull gcr.io/k8s-minikube/busybox: (2.27059801s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-385962
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-385962: (10.726220314s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-385962 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-385962 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (30.387974084s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-385962 image list
helpers_test.go:175: Cleaning up "test-preload-385962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-385962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-385962: (2.121360909s)
--- PASS: TestPreload (148.40s)

TestScheduledStopUnix (105.38s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-217091 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-217091 --memory=2048 --driver=docker  --container-runtime=docker: (32.285017482s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-217091 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-217091 -n scheduled-stop-217091
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-217091 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 01:02:12.219804    7598 retry.go:31] will retry after 142.581µs: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.220300    7598 retry.go:31] will retry after 187.371µs: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.221433    7598 retry.go:31] will retry after 125.329µs: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.222558    7598 retry.go:31] will retry after 189.871µs: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.224222    7598 retry.go:31] will retry after 575.881µs: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.225342    7598 retry.go:31] will retry after 656.803µs: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.226462    7598 retry.go:31] will retry after 1.319323ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.228601    7598 retry.go:31] will retry after 990.89µs: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.229705    7598 retry.go:31] will retry after 2.11527ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.232849    7598 retry.go:31] will retry after 3.301156ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.237026    7598 retry.go:31] will retry after 5.591713ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.243279    7598 retry.go:31] will retry after 9.264253ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.253613    7598 retry.go:31] will retry after 16.409342ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.270997    7598 retry.go:31] will retry after 24.71536ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
I0927 01:02:12.296212    7598 retry.go:31] will retry after 21.66639ms: open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/scheduled-stop-217091/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-217091 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-217091 -n scheduled-stop-217091
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-217091
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-217091 --schedule 15s
E0927 01:02:52.044006    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-217091
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-217091: exit status 7 (66.484522ms)

-- stdout --
	scheduled-stop-217091
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-217091 -n scheduled-stop-217091
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-217091 -n scheduled-stop-217091: exit status 7 (70.082598ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-217091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-217091
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-217091: (1.621744584s)
--- PASS: TestScheduledStopUnix (105.38s)

TestSkaffold (117.12s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2449192319 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-655916 --memory=2600 --driver=docker  --container-runtime=docker
E0927 01:03:35.121291    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-655916 --memory=2600 --driver=docker  --container-runtime=docker: (32.269840684s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2449192319 run --minikube-profile skaffold-655916 --kube-context skaffold-655916 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2449192319 run --minikube-profile skaffold-655916 --kube-context skaffold-655916 --status-check=true --port-forward=false --interactive=false: (1m9.606710336s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-8577b8c7dc-h2pvp" [35c4167d-053b-40c4-8b3f-a357e94aa2f2] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002980595s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5fdbfbfb46-85k5v" [b3072f3f-edcf-4cf0-a5a2-2969bbc4d8a9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004058124s
helpers_test.go:175: Cleaning up "skaffold-655916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-655916
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-655916: (2.908863229s)
--- PASS: TestSkaffold (117.12s)

TestInsufficientStorage (10.87s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-243403 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-243403 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.64735761s)

-- stdout --
	{"specversion":"1.0","id":"cb807625-0882-46cc-ae26-25453abc7589","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-243403] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3db3170d-2832-4991-b227-f9ee5060d5cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"96538c63-7909-4d7e-af18-7dba233bd29f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c7bef78d-287f-4147-8682-754b47ec7cb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig"}}
	{"specversion":"1.0","id":"66a1717d-58f9-44da-b1ea-d80733ae998d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube"}}
	{"specversion":"1.0","id":"f905e79c-2c5c-41f0-8224-030d9e0e726b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"65e8ec4d-9aa9-48e0-b0d3-8e0809f16a30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6108e7e8-ddf8-4ce9-a16c-627667497ece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"53a1c04f-d289-488b-8b92-06e1466d290a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c47155cd-61a6-4295-8ab6-b36a2edfd2b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbaa719b-0743-4b26-a60c-e1c84acace2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ce8b6f1d-a5ba-4dbd-b900-3bc199ee14ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-243403\" primary control-plane node in \"insufficient-storage-243403\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1004fdb-75c6-41e9-9ad1-b0f4ccccd6d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2f54a40-e158-4cf2-bbc0-4f520565f1c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae494b10-edfb-4a86-9da0-56f2014f820a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-243403 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-243403 --output=json --layout=cluster: exit status 7 (274.329631ms)

-- stdout --
	{"Name":"insufficient-storage-243403","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-243403","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0927 01:05:30.854638  224415 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-243403" does not appear in /home/jenkins/minikube-integration/19711-2273/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-243403 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-243403 --output=json --layout=cluster: exit status 7 (273.221902ms)

-- stdout --
	{"Name":"insufficient-storage-243403","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-243403","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0927 01:05:31.127998  224473 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-243403" does not appear in /home/jenkins/minikube-integration/19711-2273/kubeconfig
	E0927 01:05:31.138241  224473 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/insufficient-storage-243403/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-243403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-243403
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-243403: (1.676985893s)
--- PASS: TestInsufficientStorage (10.87s)

TestRunningBinaryUpgrade (81.7s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4045028516 start -p running-upgrade-256548 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4045028516 start -p running-upgrade-256548 --memory=2200 --vm-driver=docker  --container-runtime=docker: (46.081426951s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-256548 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0927 01:13:35.120825    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-256548 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.691501089s)
helpers_test.go:175: Cleaning up "running-upgrade-256548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-256548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-256548: (2.206851738s)
--- PASS: TestRunningBinaryUpgrade (81.70s)

TestKubernetesUpgrade (384.05s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.085307351s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-308142
E0927 01:12:51.873665    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:12:52.044209    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-308142: (10.980366476s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-308142 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-308142 status --format={{.Host}}: exit status 7 (92.155508ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m43.23837936s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-308142 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (112.41571ms)

-- stdout --
	* [kubernetes-upgrade-308142] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-308142
	    minikube start -p kubernetes-upgrade-308142 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3081422 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-308142 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0927 01:17:52.044305    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-308142 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.545921053s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-308142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-308142
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-308142: (2.838688692s)
--- PASS: TestKubernetesUpgrade (384.05s)

TestMissingContainerUpgrade (113.5s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3930136207 start -p missing-upgrade-574235 --memory=2200 --driver=docker  --container-runtime=docker
E0927 01:10:48.990129    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:55.112234    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3930136207 start -p missing-upgrade-574235 --memory=2200 --driver=docker  --container-runtime=docker: (36.950791865s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-574235
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-574235: (10.434408649s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-574235
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-574235 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-574235 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m2.484168735s)
helpers_test.go:175: Cleaning up "missing-upgrade-574235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-574235
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-574235: (2.719782299s)
--- PASS: TestMissingContainerUpgrade (113.50s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-504624 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-504624 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (102.56547ms)

-- stdout --
	* [NoKubernetes-504624] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.28s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-504624 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-504624 --driver=docker  --container-runtime=docker: (43.628020518s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-504624 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-504624 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-504624 --no-kubernetes --driver=docker  --container-runtime=docker: (17.232331924s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-504624 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-504624 status -o json: exit status 2 (381.740661ms)

-- stdout --
	{"Name":"NoKubernetes-504624","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-504624
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-504624: (1.697150725s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.31s)

                                                
                                    
TestNoKubernetes/serial/Start (9.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-504624 --no-kubernetes --driver=docker  --container-runtime=docker
E0927 01:06:38.190063    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-504624 --no-kubernetes --driver=docker  --container-runtime=docker: (9.991791398s)
--- PASS: TestNoKubernetes/serial/Start (9.99s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-504624 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-504624 "sudo systemctl is-active --quiet service kubelet": exit status 1 (317.153187ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
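The `ssh: Process exited with status 3` above is `systemctl is-active` reporting an inactive unit: it exits 0 for an active unit and non-zero otherwise, with 3 being the usual "inactive" code. An inactive kubelet is exactly what this test wants, so the non-zero exit means PASS. A trivial sketch naming that convention (helper name is invented):

```go
package main

import "fmt"

// unitActive interprets a `systemctl is-active` exit code:
// 0 means the unit is active; any non-zero code (typically 3
// for inactive, as seen in the ssh session above) means it is not.
func unitActive(exitCode int) bool {
	return exitCode == 0
}

func main() {
	fmt.Println(unitActive(3)) // kubelet above: inactive, as expected
	fmt.Println(unitActive(0)) // what a running kubelet would report
}
```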

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-504624
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-504624: (1.277863456s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.36s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-504624 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-504624 --driver=docker  --container-runtime=docker: (8.356405144s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-504624 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-504624 "sudo systemctl is-active --quiet service kubelet": exit status 1 (376.253764ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.79s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (123.44s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.774741389 start -p stopped-upgrade-922419 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.774741389 start -p stopped-upgrade-922419 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m22.083955076s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.774741389 -p stopped-upgrade-922419 stop
E0927 01:10:08.013361    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:08.019827    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:08.031290    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:08.052725    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:08.094089    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:08.175599    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:08.337081    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:08.658551    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:09.300583    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.774741389 -p stopped-upgrade-922419 stop: (10.783321051s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-922419 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0927 01:10:10.582967    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:13.144302    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:18.266446    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:10:28.507778    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-922419 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.570183173s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.44s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-922419
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-922419: (1.306129022s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                    
TestPause/serial/Start (45.9s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-900873 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-900873 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (45.898773646s)
--- PASS: TestPause/serial/Start (45.90s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (26.89s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-900873 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0927 01:15:08.013064    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-900873 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.876782352s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.89s)

                                                
                                    
TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-900873 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-900873 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-900873 --output=json --layout=cluster: exit status 2 (360.879128ms)

-- stdout --
	{"Name":"pause-900873","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-900873","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)

                                                
                                    
TestPause/serial/Unpause (0.48s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-900873 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.48s)

                                                
                                    
TestPause/serial/PauseAgain (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-900873 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)

                                                
                                    
TestPause/serial/DeletePaused (2.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-900873 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-900873 --alsologtostderr -v=5: (2.227438618s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.35s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-900873
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-900873: exit status 1 (15.650332ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-900873: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.35s)
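`docker volume inspect` on a missing volume exits 1 but still prints a JSON array on stdout; the empty `[]` above is what the test treats as proof that `delete -p` removed the volume. A sketch of that check:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// volumeGone parses the stdout of `docker volume inspect NAME`,
// which is a JSON array of matching volumes even on failure.
// An empty array means the volume no longer exists (the command
// also exits 1 in that case, as captured above).
func volumeGone(stdout string) (bool, error) {
	var vols []json.RawMessage
	if err := json.Unmarshal([]byte(stdout), &vols); err != nil {
		return false, err
	}
	return len(vols) == 0, nil
}

func main() {
	gone, err := volumeGone("[]") // the stdout captured above
	fmt.Println(gone, err)
}
```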

                                                
                                    
TestNetworkPlugins/group/auto/Start (75.04s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0927 01:15:35.715809    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m15.044688751s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.04s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-630490 "pgrep -a kubelet"
I0927 01:16:31.857960    7598 config.go:182] Loaded profile config "auto-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-630490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6wt5b" [1795fab2-24e6-467f-8fbd-43f05a264040] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6wt5b" [1795fab2-24e6-467f-8fbd-43f05a264040] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003747803s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (75.26s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m15.262971852s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.053842906s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rz7fh" [a45811bf-f695-4e38-aeef-f28ded8a985c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003954547s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-630490 "pgrep -a kubelet"
I0927 01:18:24.115090    7598 config.go:182] Loaded profile config "kindnet-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-630490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m5m5b" [bb43aec1-ccbc-4d46-95f0-b1a434127e1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m5m5b" [bb43aec1-ccbc-4d46-95f0-b1a434127e1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009699593s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0927 01:18:35.120841    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (59.785804192s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.79s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l2f76" [a86dd649-eca0-4c5f-87a2-17774327a5ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004684287s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-630490 "pgrep -a kubelet"
I0927 01:19:29.365298    7598 config.go:182] Loaded profile config "calico-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.48s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-630490 replace --force -f testdata/netcat-deployment.yaml
I0927 01:19:29.813061    7598 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5qgzx" [ca44c6a1-9be7-4d25-9909-81e38ea0b08b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5qgzx" [ca44c6a1-9be7-4d25-9909-81e38ea0b08b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004350192s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.48s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-630490 "pgrep -a kubelet"
I0927 01:20:00.272003    7598 config.go:182] Loaded profile config "custom-flannel-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-630490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f589v" [f1f20388-06b2-4710-a7cb-4c632b9c21fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f589v" [f1f20388-06b2-4710-a7cb-4c632b9c21fa] Running
E0927 01:20:08.013111    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003409209s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (52.938251648s)
--- PASS: TestNetworkPlugins/group/false/Start (52.94s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (49.887075444s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.89s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-630490 "pgrep -a kubelet"
I0927 01:21:02.127525    7598 config.go:182] Loaded profile config "false-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-630490 replace --force -f testdata/netcat-deployment.yaml
I0927 01:21:02.458037    7598 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c8vm5" [a917f02f-0080-46b6-be21-dbc9e9c53367] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c8vm5" [a917f02f-0080-46b6-be21-dbc9e9c53367] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004192405s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.34s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.26s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-630490 "pgrep -a kubelet"
I0927 01:21:28.193930    7598 config.go:182] Loaded profile config "enable-default-cni-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-630490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vsrz2" [6bf02c0f-bad1-4057-b78c-09ff678c99da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 01:21:32.116998    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:32.123227    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:32.134733    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:32.156809    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:32.200631    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:32.282586    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:32.444691    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:32.769248    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vsrz2" [6bf02c0f-bad1-4057-b78c-09ff678c99da] Running
E0927 01:21:34.692199    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003706694s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0927 01:21:37.253829    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m2.75049883s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.75s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.34s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0927 01:22:13.101673    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (51.050510873s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.05s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c8d9d" [2ee65fe6-7bcf-476a-bb70-f8e886007f38] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003843907s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-630490 "pgrep -a kubelet"
I0927 01:22:44.669670    7598 config.go:182] Loaded profile config "flannel-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-630490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lp9tj" [3b813c24-571a-43dc-a347-60079e4cd760] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lp9tj" [3b813c24-571a-43dc-a347-60079e4cd760] Running
E0927 01:22:52.044139    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:22:54.064010    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004485739s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-630490 "pgrep -a kubelet"
I0927 01:22:55.947773    7598 config.go:182] Loaded profile config "bridge-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-630490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vsrbc" [26bfb849-b9e9-46b6-8e7b-34ff5a79fb89] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vsrbc" [26bfb849-b9e9-46b6-8e7b-34ff5a79fb89] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003392706s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.28s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0927 01:23:17.787334    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:17.793711    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:17.805043    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:17.826836    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:17.868279    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:17.949925    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:18.112239    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:18.192254    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:18.436203    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:19.079581    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:20.361616    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:22.923466    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:28.045821    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-630490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (48.884706414s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (48.88s)

                                                
                                    
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-210508 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0927 01:23:35.120946    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:38.287625    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:58.768935    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-210508 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m34.698782921s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.70s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-630490 "pgrep -a kubelet"
I0927 01:24:06.986650    7598 config.go:182] Loaded profile config "kubenet-630490": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-630490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qwfdl" [bd933fc4-6ee5-4879-a370-397da9f03143] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qwfdl" [bd933fc4-6ee5-4879-a370-397da9f03143] Running
E0927 01:24:15.985683    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.003594456s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.35s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-630490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.25s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.22s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-630490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.26s)
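The Localhost and HairPin steps above both reduce to a TCP connect probe: `nc -w 5 -z <target> 8080` succeeds iff the port accepts a connection, with the hairpin variant targeting the pod's own `netcat` service name. A local sketch of that probe against a stand-in listener (in the test the listener is the netcat deployment inside the cluster):

```python
import socket
import threading

def tcp_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """TCP connect probe, the check behind `nc -w 5 -z host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in listener so the probe has something local to hit.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=srv.accept, daemon=True).start()

print(tcp_open("127.0.0.1", port))  # → True
```

Hairpin traffic exercises the CNI's ability to route a pod's packets back to itself via its service VIP, which is why it is a separate case from the plain localhost probe.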

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (78.86s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-433271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:24:43.444310    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/calico-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:00.643998    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:00.650300    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:00.661626    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:00.682986    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:00.724486    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:00.805836    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:00.967189    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:01.289369    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:01.930735    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:03.212575    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:03.925642    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/calico-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:05.774629    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:08.013297    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:10.896609    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:21.138273    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:41.619903    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:44.887661    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/calico-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-433271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m18.859530008s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.33s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-433271 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [752bcf9b-5f32-4d4a-af0e-359b55113d21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0927 01:26:01.653279    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:02.416743    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:02.423064    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:02.434363    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:02.455710    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:02.500218    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:02.582272    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:02.743538    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:03.065421    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [752bcf9b-5f32-4d4a-af0e-359b55113d21] Running
E0927 01:26:03.707779    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:04.989584    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004251015s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-433271 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)
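The deploy check finishes by exec'ing `ulimit -n` in the busybox pod, confirming the container's shell reports a usable open-file limit. The value the shell prints is the soft RLIMIT_NOFILE of the shell process, which can be read directly (a local illustration, not the in-cluster value):

```python
import resource

# `/bin/sh -c "ulimit -n"` prints the shell's soft RLIMIT_NOFILE;
# the resource module exposes the same (soft, hard) pair.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft)
```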

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-210508 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8bc9c518-9ee2-47aa-931e-83f6f3bbd2af] Pending
helpers_test.go:344: "busybox" [8bc9c518-9ee2-47aa-931e-83f6f3bbd2af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0927 01:26:07.551273    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [8bc9c518-9ee2-47aa-931e-83f6f3bbd2af] Running
E0927 01:26:12.673238    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003128725s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-210508 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-433271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-433271 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-433271 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-433271 --alsologtostderr -v=3: (10.919765372s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-210508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-210508 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-210508 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-210508 --alsologtostderr -v=3: (11.14475125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-433271 -n embed-certs-433271
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-433271 -n embed-certs-433271: exit status 7 (66.087585ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-433271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
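The status check above exits 7 with `Stopped` on stdout, and the harness notes "status error: exit status 7 (may be ok)": minikube encodes host state in non-zero exit codes rather than treating every non-zero result as failure. A minimal Python sketch of that acceptance logic (the 0/7 mapping below mirrors only what this log shows, not minikube's full exit-code table):

```python
def host_state_acceptable(exit_code: int, stdout: str) -> bool:
    """Accept a healthy host (exit 0) or a cleanly stopped one
    (exit 7 with "Stopped" on stdout, as seen in this log)."""
    if exit_code == 0:
        return True
    return exit_code == 7 and stdout.strip() == "Stopped"

print(host_state_acceptable(7, "\tStopped\n"))  # → True
```

This is why the test can proceed to `addons enable dashboard` against a stopped profile: the addon change is recorded in the profile config and applied on the next start.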

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (298.89s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-433271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:26:22.581904    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:22.915642    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-433271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m58.520730429s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-433271 -n embed-certs-433271
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-210508 -n old-k8s-version-210508
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-210508 -n old-k8s-version-210508: exit status 7 (87.808948ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-210508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (147.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-210508 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0927 01:26:28.506933    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:28.514384    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:28.527061    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:28.548484    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:28.590403    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:28.680235    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:28.842017    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:29.163999    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:29.806117    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:31.077095    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:31.087875    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:32.116589    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:33.650051    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:38.772185    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:43.397438    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:49.014404    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:59.827489    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:06.809132    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/calico-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:09.495646    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:24.359634    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:35.113573    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.287370    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.293719    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.305088    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.326459    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.367949    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.449408    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.611015    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.932429    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:39.574494    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:40.856244    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:43.418143    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:44.503619    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:48.540161    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:50.457765    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:52.043443    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.201941    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.208581    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.219989    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.241346    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.283017    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.364390    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.525893    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:56.847637    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:57.489755    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:58.771855    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:58.781544    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:01.333697    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:06.455923    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:16.697580    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:17.787148    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:19.262931    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:35.120216    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:37.178937    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:45.495375    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:46.281232    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-210508 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m27.322715818s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-210508 -n old-k8s-version-210508
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (147.69s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wr2nm" [e3de5bd2-bbc4-4a77-b12a-a42861a21d48] Running
E0927 01:29:00.224884    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003685344s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wr2nm" [e3de5bd2-bbc4-4a77-b12a-a42861a21d48] Running
E0927 01:29:07.312249    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:07.318597    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:07.330054    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:07.351409    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:07.392886    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:07.474341    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:07.635790    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:07.957596    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00365484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-210508 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-210508 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-210508 --alsologtostderr -v=1
E0927 01:29:08.599976    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-210508 -n old-k8s-version-210508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-210508 -n old-k8s-version-210508: exit status 2 (341.246679ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-210508 -n old-k8s-version-210508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-210508 -n old-k8s-version-210508: exit status 2 (306.408936ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-210508 --alsologtostderr -v=1
E0927 01:29:09.881339    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-210508 -n old-k8s-version-210508
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-210508 -n old-k8s-version-210508
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

TestStartStop/group/no-preload/serial/FirstStart (52.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-461256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:29:17.563885    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:18.141182    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:22.950526    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/calico-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:27.805500    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:48.289469    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:29:50.651033    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/calico-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:30:00.643994    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-461256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (52.271787134s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.27s)

TestStartStop/group/no-preload/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-461256 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ea6064a4-5e32-444e-a652-04b195114fa9] Pending
helpers_test.go:344: "busybox" [ea6064a4-5e32-444e-a652-04b195114fa9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0927 01:30:08.013467    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [ea6064a4-5e32-444e-a652-04b195114fa9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003960312s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-461256 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-461256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-461256 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/no-preload/serial/Stop (10.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-461256 --alsologtostderr -v=3
E0927 01:30:22.147163    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-461256 --alsologtostderr -v=3: (10.855026213s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-461256 -n no-preload-461256
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-461256 -n no-preload-461256: exit status 7 (72.125832ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-461256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (289.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-461256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:30:28.346217    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:30:29.250879    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:30:40.062970    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:02.417049    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:05.823795    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:05.830211    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:05.841599    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:05.863041    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:05.904476    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:05.985820    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:06.147307    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:06.469086    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:07.111319    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:08.392796    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:10.954966    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:16.076867    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-461256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m48.693550407s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-461256 -n no-preload-461256
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.05s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mr6tn" [ee325911-2de0-47c4-9ec4-2d14cc86e36d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003498889s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mr6tn" [ee325911-2de0-47c4-9ec4-2d14cc86e36d] Running
E0927 01:31:26.318492    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:28.506615    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:30.123183    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003508745s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-433271 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-433271 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-433271 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-433271 -n embed-certs-433271
E0927 01:31:32.116924    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-433271 -n embed-certs-433271: exit status 2 (324.119698ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-433271 -n embed-certs-433271
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-433271 -n embed-certs-433271: exit status 2 (308.690718ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-433271 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-433271 -n embed-certs-433271
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-433271 -n embed-certs-433271
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.77s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-424062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:31:46.800312    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:51.173115    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:56.222134    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-424062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (48.690402407s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.69s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-424062 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd86c862-dc89-4233-b807-3338eac6be30] Pending
helpers_test.go:344: "busybox" [dd86c862-dc89-4233-b807-3338eac6be30] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0927 01:32:27.761977    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [dd86c862-dc89-4233-b807-3338eac6be30] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00351906s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-424062 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-424062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-424062 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-424062 --alsologtostderr -v=3
E0927 01:32:38.287842    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-424062 --alsologtostderr -v=3: (10.923617169s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062: exit status 7 (69.33575ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-424062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-424062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:32:52.043427    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/functional-787765/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:56.202403    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:05.989291    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:17.787119    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kindnet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:23.904720    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/bridge-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.120513    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:49.683735    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:34:07.312494    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:34:22.950975    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/calico-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:34:35.015306    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/kubenet-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:00.644244    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/custom-flannel-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:08.012465    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/skaffold-655916/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-424062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m35.765050103s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-89rzc" [b821de24-31c5-4dc6-ada0-dec69c59af39] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004084486s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-89rzc" [b821de24-31c5-4dc6-ada0-dec69c59af39] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004393886s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-461256 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-461256 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.78s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-461256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-461256 -n no-preload-461256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-461256 -n no-preload-461256: exit status 2 (324.256416ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-461256 -n no-preload-461256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-461256 -n no-preload-461256: exit status 2 (310.440063ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-461256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-461256 -n no-preload-461256
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-461256 -n no-preload-461256
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.78s)

TestStartStop/group/newest-cni/serial/FirstStart (39.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-928850 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:36:02.416800    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/false-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:36:05.823617    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-928850 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.413840159s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.41s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-928850 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-928850 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.090773616s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (10.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-928850 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-928850 --alsologtostderr -v=3: (10.981845928s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.98s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-928850 -n newest-cni-928850
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-928850 -n newest-cni-928850: exit status 7 (73.625627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-928850 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (19.57s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-928850 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 01:36:28.506972    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/enable-default-cni-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:36:32.116589    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/auto-630490/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:36:33.525430    7598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/old-k8s-version-210508/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-928850 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (19.015245258s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-928850 -n newest-cni-928850
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.57s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-928850 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (3.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-928850 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-928850 -n newest-cni-928850
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-928850 -n newest-cni-928850: exit status 2 (423.810321ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-928850 -n newest-cni-928850
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-928850 -n newest-cni-928850: exit status 2 (357.078501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-928850 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-928850 -n newest-cni-928850
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-928850 -n newest-cni-928850
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mdlp6" [8a228f7d-c1e0-4994-9c6e-70b206fa9aaf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003652316s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mdlp6" [8a228f7d-c1e0-4994-9c6e-70b206fa9aaf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003568939s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-424062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-424062 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-424062 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062: exit status 2 (313.457972ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062: exit status 2 (297.551067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-424062 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-424062 -n default-k8s-diff-port-424062
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.5s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-686350 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-686350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-686350
--- SKIP: TestDownloadOnlyKic (0.50s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-630490 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-630490

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-630490

>>> host: /etc/nsswitch.conf:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /etc/hosts:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /etc/resolv.conf:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-630490

>>> host: crictl pods:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: crictl containers:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> k8s: describe netcat deployment:
error: context "cilium-630490" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-630490" does not exist

>>> k8s: netcat logs:
error: context "cilium-630490" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-630490" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-630490" does not exist

>>> k8s: coredns logs:
error: context "cilium-630490" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-630490" does not exist

>>> k8s: api server logs:
error: context "cilium-630490" does not exist

>>> host: /etc/cni:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: ip a s:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: ip r s:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: iptables-save:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: iptables table nat:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-630490

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-630490

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-630490" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-630490" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-630490

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-630490

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-630490" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-630490" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-630490" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-630490" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-630490" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: kubelet daemon config:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> k8s: kubelet logs:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:06:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-504624
contexts:
- context:
    cluster: NoKubernetes-504624
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:06:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-504624
  name: NoKubernetes-504624
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-504624
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/NoKubernetes-504624/client.crt
    client-key: /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/NoKubernetes-504624/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-630490

>>> host: docker daemon status:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: docker daemon config:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: docker system info:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: cri-docker daemon status:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: cri-docker daemon config:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: cri-dockerd version:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: containerd daemon status:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: containerd daemon config:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: containerd config dump:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: crio daemon status:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: crio daemon config:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: /etc/crio:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

>>> host: crio config:
* Profile "cilium-630490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-630490"

----------------------- debugLogs end: cilium-630490 [took: 3.516320203s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-630490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-630490
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-955358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-955358
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
